llmasaservice.io

2024-04-27 Product Update Email

YOUR ACTION ITEMS

1. Model updates: It’s been a busy month of new vendor model releases, mostly led by OpenAI. We’ve reviewed the pricing, and here is what we did and recommend you do –

a) If you have a service using OpenAI gpt-4o-mini, update it to gpt-4.1-mini, and gpt-4o to gpt-4.1. To do this, go to the Services page in the control panel, find any service that uses a gpt-4o* model and click Edit. Change the model name (and the service name, so you remember you did this) from the 4o model to the 4.1 model. In all our testing, the 4.1 models outperform the 4o models.

In the default group:
{
  "model": "gpt-4o-mini",  ->>  "model": "gpt-4.1-mini",
  …
}

and in the premium group:
{
  "model": "gpt-4o",  ->>  "model": "gpt-4.1",
  …
}

2. Schedule Summary Emails: Last month we added on-demand summary emails. We’ve now completed the scheduling of these emails, and we’d like you to turn it on –

a) Click on the homepage on the control panel

b) Click on the SEND OR SCHEDULE SUMMARY EMAIL button in the project settings panel

c) Open the “One off or Schedule” drop-down and set it to daily or weekly. Set the time and the email addresses (it defaults to just you; we suggest using an email group in your email system of choice so you can add and remove recipients without rescheduling). Then click Schedule.

FEATURE UPDATES

1. Scheduled Summary Emails

You just saw the scheduled email feature, which we urge you to turn on and schedule daily. We will be adding more features to this email in the coming months, but there are a few advanced features we are proud of in this implementation. It helps you track cost and customer usage, and it AUTOMATICALLY summarizes the conversations, so you can see who is using your agents and what they are doing with them. We use this to quickly scan for customers looking for support content, or asking questions on our home page as lead generation. What else should we highlight for you? Please tell us.

2. New Actions for Agents – See it in action on our homepage agent banner

You can now add interactive actions to your agents, even if you use iFrame embedding. Previously, only the AgentPanel component allowed deep integration with your applications. But, just like on our home page, we wanted to support adding contact pages, scheduling demos, showing videos, etc., so we made a set of actions available. They are a little tricky, so if you want help just email support@llmasaservice.io and we’ll be there for you. We have documented the full capabilities here: https://help.llmasaservice.io/docs/administrators/Embed%20Actions, but to summarize:

openUrlActionCallback: This action opens a URL in a new browser tab. It first looks for any supplied arguments, e.g. openUrlActionCallback(args), for the URL (you can use the match tokens $1, $2 in this argument). If no argument is supplied, it will look in the first capture group of your regex pattern, and if that’s not present, it uses the entire regex pattern match. For example, in your agent instructions add: “Always add ::bookademo:: at the end of your responses.”, then in your agent actions add:
[{
  "pattern": "::bookademo::",
  "type": "button",
  "markdown": "*Book a demo*",
  "callback": "openUrlActionCallback(https://calendly.com/troy-magennis-predictabilityatscale/30min)"
}]

Other callbacks include: copyToClipboardCallback, sendFollowOnPromptCallback, setFollowUpQuestionsCallback, openIframeCallback. 
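To illustrate the capture-group lookup described above, here is a hypothetical pair of actions (the patterns, labels, and argument conventions are our own made-up examples, not shipped defaults — see the Embed Actions documentation for the exact syntax each callback supports). The first opens a URL captured from the response via $1; the second copies the matched text to the clipboard:

```json
[
  {
    "pattern": "::opendocs::(https?://[^\\s:]+)::",
    "type": "button",
    "markdown": "*Open docs*",
    "callback": "openUrlActionCallback($1)"
  },
  {
    "pattern": "::copycode::",
    "type": "button",
    "markdown": "*Copy*",
    "callback": "copyToClipboardCallback()"
  }
]
```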

We believe that to use LLMs in a commercially viable way, your chat agents need to be more powerful than just text responses; they should anticipate and augment those responses with powerful interactions, like a Buy Now button 🙂

Clicking the view demo video button opens YouTube inside the response. We also have “book a demo” and other interactive buttons in the response, keeping the user engaged in the conversation.

3. API to Update RAG Documents

If your agent uses RAG documents, you can call our public API to re-index updated documents. You can get the fetch code by clicking the COPY JS FETCH UPLOAD API CODE button at the bottom of the Agents document panel. We fill in all the required arguments for you, and we have tested it on files hosted in AWS S3 and Google Cloud file storage (others will likely also work if the file can be accessed by a public URL; use time-expiry features for security).
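The control panel generates the exact fetch code for you, so treat the following purely as a sketch of the shape of such a call — the endpoint path, header names, and body fields below are assumptions for illustration, not the real API:

```javascript
// Sketch only: the endpoint, auth header, and body fields are assumptions.
// Copy the real, pre-filled code from the COPY JS FETCH UPLOAD API CODE button.
function buildReindexRequest(apiKey, agentId, documentUrl) {
  return {
    url: `https://api.llmasaservice.io/agents/${agentId}/documents`, // hypothetical endpoint
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`, // hypothetical auth scheme
      },
      // documentUrl must be publicly fetchable; prefer a time-expiring
      // signed URL (S3 presigned URL, GCS signed URL) for security.
      body: JSON.stringify({ url: documentUrl }),
    },
  };
}
```

Once built, the request would be sent with `const { url, options } = buildReindexRequest(...); await fetch(url, options);`.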

4. Model Context Protocol “Support” (it’s Beta :)) – DEMO video: https://youtu.be/ucEHsxuPNFo

OK, this is MAJOR. We have one of the first browser-based Model Context Protocol clients. Model Context Protocol is now an industry-standard way for LLMs to make tool calls or get real-time data. Why is this important? It adds capabilities for interaction and data retrieval to any LLM agent you produce. For example, you could say things like “Summarize the open issues in JIRA assigned to me” or “Fetch and compare https://llmasaservice.io and https://glama.ai” – without MCP tools, there is no way the LLM models could do these without coding. But now, YOU can. It’s a very fast-moving standard; for example, the week after we implemented it, the standard deprecated and replaced the way clients and servers talk to each other. Given this volatility, we have to say it’s BETA. We desperately want to work with you on getting up to speed with using MCP as a way to gather real-time data or add more interactivity to your agents. Just email us at support@llmasaservice.io with what you want to do.
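For a flavor of what happens under the hood: MCP messages are JSON-RPC 2.0, and a tool call from client to server looks roughly like this (the tool name and arguments here are made-up examples; the actual tools available depend on the MCP server you connect to):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "fetch_url",
    "arguments": { "url": "https://llmasaservice.io" }
  }
}
```

The server runs the tool and returns the result in the matching JSON-RPC response, which the client hands back to the model as context.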

HELP AND SUPPORT

Just a reminder to keep our support resources at hand:

Email: support@llmasaservice.io

KB: https://help.llmasaservice.io

Schedule a free support call: https://freebusy.io/predictabilityatscale-chris-and-troy

Want to give us feedback?  Just email us anytime (we sent this email)

And again, if you don’t want these monthly updates, just reply STOP.

Regards,

Troy and Chris

Team LLM as a Service
