FutureAI Releases Masterpiece

FutureAI is releasing Masterpiece, a generative application architecture for developers. Think of Masterpiece as Waymo for your application’s interface: just as a self-driving car needs cameras to drive itself, Masterpiece is what lets your application’s interface drive itself for your users. It is a poly-agentic generative architecture built on Leo 1, FutureAI’s specialized version of Llama 70B. Leo 1 supports two embedding modalities, text and multi-modal, and Masterpiece uses three agents in total to ingest a user’s data, vectorize it, cluster it, and then contextualize it for pairing with your dataset. Masterpiece is the first dual-dataset architecture to make an application generative in nature: it allows your application to deliver a generative interface to users instead of one retrieved from your database and created by a human.
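
To make the pipeline concrete, here is a rough conceptual sketch of the three agents; every name and type below is illustrative rather than part of the actual SDK.

// Minimal, illustrative sketch of Masterpiece's three-agent pipeline.
// Every name and type here is hypothetical, not FutureAI's actual SDK.
interface UserCluster { id: string; summary: string; }
interface Generation { layout: string; items: string[]; }

// Agent 1: ingest the user's connected data and embed it (text and multi-modal).
async function ingestAndEmbed(userData: string[]): Promise<number[][]> {
  return userData.map(() => []); // embedding vectors would go here
}

// Agent 2: cluster the embeddings and select the most relevant cluster.
async function clusterAndSelect(vectors: number[][]): Promise<UserCluster> {
  return { id: "cluster-0", summary: "most relevant user cluster" };
}

// Agent 3: Leo 1 contextualizes the selected cluster and pairs it with your dataset.
async function contextualizeAndPair(cluster: UserCluster, dataset: string[]): Promise<Generation> {
  return { layout: "layout-for-" + cluster.id, items: dataset.slice(0, 3) };
}

async function composeGeneration(userData: string[], dataset: string[]): Promise<Generation> {
  const vectors = await ingestAndEmbed(userData);
  const cluster = await clusterAndSelect(vectors);
  return contextualizeAndPair(cluster, dataset);
}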

Begin building with Masterpiece today to integrate it into your application’s architecture and start transforming your application into a fully generative state, beginning with the user interface. We have created an SDK that makes it easy to integrate components that prompt users for a generative experience by connecting their data to FutureAI. FutureAI includes two datasets with Masterpiece, and our team is ready to help fine-tune Masterpiece’s results for your application’s specific use case.

Getting Started and Choosing Which Version of Leo 1 to Provision with Masterpiece

Masterpiece is built around Leo 1, FutureAI’s contextualization model, which is specialized for contextualizing a user’s email and transaction data. Leo 1 acts as Masterpiece’s brain: it makes contextual sense of the most relevant user cluster and uses that context to pair the user’s clusters with your dataset, determining what to generate for the user interface.

The first decision to make when integrating Masterpiece is which version of Leo 1 you want to provision with it. Out of the box, Masterpiece comes with our base version of Leo 1, which you can use immediately without contributing any of your data to adapt or fine-tune it. The base version is performant, and we expect its results to be highly accurate in a generalized sense. If you would like Leo 1 to be more refined and adapted to your dataset and your application’s use case, we suggest LoRA-adapting Leo 1 so it understands what your application does and the type of data it will pair user data with. This produces a more refined version of Leo 1 that delivers better results for your users.

To go one step further and get the most performant version of Leo 1, we suggest fine-tuning it with your historical data and interaction events so it starts with a strong foundational understanding of how users have historically interacted with your application. Fine-tuning Leo 1 with your data results in the most refined version and the best Generations. We are happy to LoRA-adapt or fine-tune Leo 1 for you, so there is no extra work on your end and the results are as good as possible.
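
To picture the choice, a provisioning configuration might distinguish the three options along these lines; the type and option names below are illustrative, not the exact provisioning API.

// Hypothetical sketch of choosing a Leo 1 variant when provisioning Masterpiece.
type Leo1Variant = "base" | "lora-adapted" | "fine-tuned";

interface ProvisionOptions {
  variant: Leo1Variant;
  // Used only for "lora-adapted" or "fine-tuned": a reference to the dataset
  // description or historical interaction events you share with FutureAI.
  trainingDataRef?: string;
}

// "base" works immediately with no data from you; "lora-adapted" refines Leo 1
// on your dataset and use case; "fine-tuned" trains on your historical
// interaction events for the most refined results.
const leo1Choice: ProvisionOptions = { variant: "base" };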

Generating Your API Key and Implementing FutureAI’s SDK

Once you’ve decided which version of Leo 1 to provision Masterpiece with, it’s time to generate your API key and implement FutureAI’s SDK in the front end of your application. To generate your API key, go to Developers on FutureAI and create an account. Once your account is approved, you will be able to generate an API key. Next, use our Documentation to integrate our SDK and select which dataset you want to prompt users to connect to FutureAI for generating your application’s interface. With an API key generated and a dataset selected, you can integrate our SDK and design a generative experience.
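
A minimal integration might look like the following sketch; the package name, class, and option names are illustrative placeholders, so consult the Documentation for the exact API.

// Hypothetical SDK initialization; package, class, and option names are
// illustrative assumptions, not the published API.
import { FutureAI } from "@futureai/sdk";

const futureai = new FutureAI({
  apiKey: "YOUR_FUTUREAI_API_KEY", // generated from your approved Developer account
  dataset: "transactions",         // the dataset you prompt users to connect
});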

We’ve created a custom FutureAI banner and button, as we want all users to receive the same FutureAI generative experience.

[Image: FutureAI banner]

Within your Developer account, you’ll find FutureAI’s full component library, including the FutureAI banner and button, ready to integrate with your application.
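
As an example, rendering the prompt with those stock components could look like the following React sketch; the import path, component names, and props are illustrative placeholders.

// Hypothetical usage of the FutureAI banner and button components; the import
// path, component names, and props are illustrative assumptions.
import { FutureAIBanner, FutureAIButton } from "@futureai/sdk/components";

export function GenerativeExperiencePrompt() {
  return (
    <FutureAIBanner>
      {/* The button starts the "sign in with FutureAI" consent flow. */}
      <FutureAIButton onConnect={() => console.log("user consented to connect data")} />
    </FutureAIBanner>
  );
}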

Understanding the Difference Between a User’s Data and Masterpiece’s Generative Results

Building trust with users is critical, and we believe in over-communicating about how their data will be used, what data they will share with FutureAI, and what data FutureAI will share with your application. Your application’s users must explicitly consent to connecting their data to FutureAI, and we prompt users often to provide this permission. When a user signs in with FutureAI to get a generative experience in your application, they are connecting their data to FutureAI, not to your application. This data is encrypted and stored within FutureAI. Only data that the user has explicitly consented to share will be shared with your application.

The data that FutureAI ingests is cleansed to remove sensitive data, then vectorized for clustering and contextualized for pairing with your dataset. Unless the user explicitly consents, our SDK returns only the generative results that Masterpiece chooses to generate from your dataset, not the underlying reason they were chosen or the data that went into building the Generation. This is important to keep in mind when considering what FutureAI’s SDK will return and what will be presented to your users.

Each Generation by Masterpiece is what we consider the generative result: the data we return from our SDK to you. It is your dataset, in the format Masterpiece determined to be most relevant to the user. In cases where user data is shared with your application, such as a specific transaction, a merchant name, or a user’s connection to a cluster, the user will have explicitly consented to sharing that data with you, and it should be used only for the purpose of providing a generative experience.
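
To make the boundary concrete, the shape of a returned Generation could be sketched roughly as follows; the field names are illustrative, and the key point is that you receive content from your dataset plus only explicitly consented user data, with no underlying reasoning attached.

// Hypothetical shape of a Generation returned by the SDK; field names are
// illustrative. Note there is no field explaining why items were chosen.
interface GenerationResult {
  items: Array<{ id: string; title: string; body: string }>; // drawn from YOUR dataset
  layout: "list" | "carousel" | "card";                      // format Masterpiece chose for the user
  consentedUserData?: {                                      // present only with explicit user consent
    merchantName?: string;
    transactionId?: string;
  };
}

function renderGeneration(generation: GenerationResult): void {
  // Use consentedUserData only to deliver the generative experience.
  for (const item of generation.items) {
    console.log(generation.layout, item.title);
  }
}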

Sensitive Data Attributes and FutureAI’s Data Governance and Privacy Policy

When designing Masterpiece, we had to think about how to segment out the sensitive data within a user’s dataset. FutureAI’s Data Governance and Privacy Policy is the set of criteria and attributes we sift out or mask before any embedding happens. Data matching the attributes we consider sensitive and private to a user is also excluded when creating a Generation.

Sensitive and Private Data Attributes Excluded When Creating a Generation:

1. Toxic
2. Derogatory
3. Violent
4. Sexual
5. Insult
6. Profanity
7. Death, Harm & Tragedy
8. Firearms & Weapons
9. Health
10. Religion & Belief
11. Illicit Drugs
12. War & Conflict
13. Finance
14. Politics
15. Legal

Masterpiece screens for each of these attributes when it begins ingesting user data; matching data is either masked or not ingested at all. Several steps go into this part of Masterpiece’s Generation pipeline: text moderation, Data Loss Prevention, and a final review by Llama 70B that masks or removes the data before it is ingested. We review FutureAI’s Data Governance and Privacy Policy on an ongoing basis and will continue to refine what we determine to be sensitive and private to a user. This data will not be used by Masterpiece in a Generation.
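
Conceptually, the screening can be pictured as a sequence of passes like the sketch below; the function names and verdicts are an illustration of the steps described above, not the actual implementation.

// Illustrative sketch of the screening described above: text moderation,
// Data Loss Prevention, then a final Llama 70B review. All names are hypothetical.
type Verdict = "keep" | "mask" | "remove";

function moderateText(record: string): Verdict { return "keep"; }          // flags toxic, violent, sexual content, etc.
function runDataLossPrevention(record: string): Verdict { return "keep"; } // catches sensitive identifiers
function llamaFinalReview(record: string): Verdict { return "keep"; }      // final mask-or-remove pass

function screenBeforeIngestion(record: string): string | null {
  for (const pass of [moderateText, runDataLossPrevention, llamaFinalReview]) {
    const verdict = pass(record);
    if (verdict === "remove") return null;        // not ingested at all
    if (verdict === "mask") record = "[MASKED]";  // masked before any embedding
  }
  return record;
}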

Composing a Generation with Masterpiece and FutureAI’s Privately Hosted GPU Architecture

Every time a user signs in with FutureAI, the user data ingestion pipeline within Masterpiece begins, starting the agentic journey toward a Generation. When Masterpiece creates a Generation, it returns the results via the SDK to be presented to the user. To compose Generations in a timeframe acceptable to users, we built Masterpiece on our own privately hosted GPU architecture running NVIDIA H100s. Masterpiece returns Generations to you within 5,000 milliseconds.
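
On the client side, requesting and rendering a Generation after sign-in might look like the following sketch; the client, method name, and rendering helper are illustrative placeholders, and the 5,000 millisecond figure is the latency budget described above.

// Hypothetical request for a Generation after the user signs in with FutureAI;
// the client, method name, options, and rendering helper are illustrative assumptions.
declare const futureai: {
  createGeneration(opts: { userSessionToken: string; timeoutMs: number }): Promise<unknown>;
}; // the client initialized in the earlier SDK sketch
declare function renderGenerativeInterface(generation: unknown): void; // your application's own rendering code

async function showGenerativeInterface(userSessionToken: string): Promise<void> {
  const generation = await futureai.createGeneration({
    userSessionToken, // issued when the user signed in with FutureAI
    timeoutMs: 5_000, // Generations come back within 5,000 milliseconds
  });
  renderGenerativeInterface(generation);
}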

Today, Generations are in the same modality as the content you create. We are already developing Leo 2, which will allow Masterpiece to be prompted to create a Generation in the modality it believes is most optimized for the user. Leo 2 will agentically choose the modality of the content, whether text, image, or video, and generate the content in the language it recognizes the user speaks. Our vision for Leo 2 is to reach a fully generative state (“FGS”) where all the content presented to the user is generated by Masterpiece and Leo, so that each version of your application is generative and in the format most desired by the user.

How We Calculate Input MTok and Output MTok and Price Consumption of Masterpiece

Each time Masterpiece creates a Generation, it takes the user tokens relevant to that Generation and prompts Leo with them. These are counted as Input MTok, and pricing for Input MTok starts at $19 per MTok. For each Generation, Masterpiece then generates the content from your dataset that it has determined is most optimized for the user. This output is counted as Output MTok, and Output MTok starts at $75 per MTok. For both Input and Output MTok, we do not count spacing, blanks, or empty tokens.
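
As a worked example under these starting prices, with token counts chosen purely for illustration:

// Worked example of Masterpiece consumption pricing; the token counts are illustrative.
const INPUT_PRICE_PER_MTOK = 19;   // USD, starting price
const OUTPUT_PRICE_PER_MTOK = 75;  // USD, starting price

// Suppose one Generation prompts Leo with 40,000 relevant user tokens and
// generates 12,000 tokens of content from your dataset.
const inputMTok = 40_000 / 1_000_000;   // 0.04 MTok
const outputMTok = 12_000 / 1_000_000;  // 0.012 MTok

const costUsd = inputMTok * INPUT_PRICE_PER_MTOK + outputMTok * OUTPUT_PRICE_PER_MTOK;
console.log(costUsd.toFixed(2)); // "1.66" ($0.76 input + $0.90 output)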

To start building, you can pay per MTok, or choose the Enterprise tier of Masterpiece, where a FutureAI implementation team will support you. This ongoing interaction with our architecture team gives your team the latest updates and early releases for Masterpiece. FutureAI offers discounts on MTok when you sign an enterprise agreement with a revenue commitment.