FutureAI Releases Masterpiece

FutureAI is releasing Masterpiece, a generative application architecture for developers. Think of Masterpiece as Waymo for your application’s interface: just as a self-driving car needs cameras to drive itself, Masterpiece gives your application what it needs to “self-drive” its interface for your users. It is a poly-agentic generative architecture built on a specialized version of Llama 70B, FutureAI’s Leo 1, with two embedding modalities (text and multi-modal) and three agents that ingest a user’s data, vectorize it, cluster it, and contextualize it for pairing with your dataset. Masterpiece is the first dual-dataset architecture to make an application generative in nature: it allows your application to deliver a generative interface to users, instead of one retrieved from your database that was created by a human.

Begin building with Masterpiece today by integrating it into your application’s architecture, and start the journey of transforming your application into a fully generative state, beginning with the user interface. We have created an SDK that makes it easy to add components that prompt users for a generative experience by connecting their data to FutureAI. FutureAI includes two datasets with Masterpiece, and our team is ready to help fine-tune Masterpiece’s results for your application’s specific use case.

Getting Started and Choosing Which Version of Leo 1 to Provision with Masterpiece

Masterpiece leverages FutureAI’s specialized contextualization model, Leo 1, which is built to contextualize a user’s email and transaction data. Leo 1 is integrated with Masterpiece as its brain: it makes contextual sense of the most relevant user cluster, then uses that context to pair the user’s clusters with your dataset and determine what to generate for the user interface.

The first decision when integrating Masterpiece is which version of Leo 1 to provision. Out of the box, Masterpiece comes with our base version of Leo 1, which you can use immediately without leveraging any of your data to adapt or fine-tune it. The base version is performant, and we expect its results to be highly accurate in a generalized sense. If you would like Leo 1 to be more refined and adapted to your dataset and your application’s use case, we suggest LoRA-adapting Leo 1 so it understands what your application does and the type of data it will pair user data with. This yields a more refined version of Leo 1 that delivers better results for your users.

To go one step further, and to get the most performant version of Leo 1, we suggest fine-tuning it with your historical data and interaction events, so Leo 1 starts with a strong foundational understanding of how users have historically interacted with your application. Fine-tuning Leo 1 with your data produces the most refined version, generating the best results. We are happy to LoRA-adapt or fine-tune Leo 1 for you, so there is no extra work on your end and the best results are ensured.

Generating your API Key and Implementing FutureAI’s SDK

Once you’ve decided which version of Leo 1 to provision Masterpiece with, it’s time to generate your API key and implement FutureAI’s SDK in the front end of your application. To generate your API key, go to Developers on FutureAI and create an account. Once your account is approved, you will be able to generate an API key. Next, use our Documentation to integrate our SDK and select which dataset you want to prompt users to connect to FutureAI. Once you’ve generated an API key and selected a dataset, it’s time to integrate our SDK and design a generative experience.
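As a rough illustration of the flow above, a minimal sketch of requesting a Generation with an API key and a selected dataset might look like the following. The endpoint URL, payload fields, and dataset identifiers here are illustrative assumptions, not FutureAI’s documented SDK surface; consult the Documentation for the real integration.

```python
# Hypothetical sketch of requesting a Generation with an API key.
import json
import urllib.request

API_URL = "https://api.future.ai/v1/generations"  # hypothetical endpoint

def build_generation_request(api_key: str, dataset: str, user_token: str) -> urllib.request.Request:
    """Assemble an authenticated POST asking Masterpiece for a Generation."""
    payload = {
        "dataset": dataset,        # which bundled dataset the user connected
        "user_token": user_token,  # opaque token from the user's FutureAI sign-in
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request (for example with `urllib.request.urlopen`) would return the Generation payload for your front end to render.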

We’ve created a custom FutureAI banner and button, as we want all users to receive the same FutureAI generative experience.

[Image: FutureAI banner]

Within your Developer account, you’ll find FutureAI’s full component library, including the FutureAI banner and button to integrate with your application.

Understanding the difference between a user’s data and Masterpiece’s generative results

Building trust with users is critical, and we believe in over-communicating about how their data will be used, what data will be shared with FutureAI, and what data FutureAI will share with your application. Your application’s users must explicitly consent to connect their data to FutureAI, and we prompt users often to provide this permission. When a user signs in with FutureAI to get a generative experience in your application, they are connecting their data to FutureAI, not your application. This data is encrypted and stored within FutureAI, and only data that the user has explicitly consented to will be shared with your application.

The data that FutureAI ingests is cleansed to remove sensitive data, then vectorized for clustering, and contextualized for pairing with your dataset. Unless the user explicitly consents, our SDK returns only the generative results that Masterpiece chooses to generate from your dataset, not the underlying reasons they were chosen or what data went into building the Generation. This is important to note when understanding what will be returned from FutureAI’s SDK and what will be presented to your users.

Each Generation by Masterpiece is what we consider the generative results and data returned from our SDK to you: your dataset in the format Masterpiece determined to be most relevant to the user. In cases where user data is shared with your application, such as a specific transaction, a merchant name, or a user’s connection to a cluster, users will have explicitly consented to sharing this data with you, and it should be used only for the purposes of providing a generative experience.
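To illustrate the consent boundary described above, here is a minimal sketch of how an application might handle a Generation returned from the SDK: generative content is always rendered, while user-data fields are surfaced only when consent is flagged. All field names (“results”, “content”, “user_data_consented”, “user_data”) are hypothetical, not FutureAI’s documented schema.

```python
# Hypothetical sketch: rendering a Generation while respecting user consent.
def prepare_for_render(generation: dict) -> list[dict]:
    """Keep generative content; include user-data fields only when consented."""
    safe = []
    for item in generation.get("results", []):
        entry = {"content": item["content"]}
        if item.get("user_data_consented"):
            # e.g. a specific transaction or merchant name the user agreed to share
            entry["user_data"] = item.get("user_data", {})
        safe.append(entry)
    return safe
```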

Sensitive Data Attributes and FutureAI’s Data Governance and Privacy Policy

When designing Masterpiece, we had to think about how to segment out sensitive data within a user’s dataset. FutureAI’s Data Governance and Privacy Policy is the set of criteria and attributes that we sift out or mask before any embedding happens. Attributes considered sensitive and private to a user are masked or sifted out, and are also excluded when creating a Generation.

Sensitive and Private Data Attributes Excluded When Creating a Generation:

1. Toxic
2. Derogatory
3. Violent
4. Sexual
5. Insult
6. Profanity
7. Death, Harm & Tragedy
8. Firearms & Weapons
9. Health
10. Religion & Belief
11. Illicit Drugs
12. War & Conflict
13. Finance
14. Politics
15. Legal

Masterpiece looks for each of these attributes when we begin ingesting user data; matching data is masked or not ingested at all. Multiple steps go into this part of Masterpiece’s Generation: text moderation, Data Loss Prevention, and a final review by Llama 70B, which masks or removes any remaining sensitive data before ingestion. We review FutureAI’s Data Governance and Privacy Policy on an ongoing basis and will continue to refine the data we determine to be sensitive and private to a user. This data will not be used by Masterpiece in a Generation.
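As a simplified sketch, the masking step can be pictured as category patterns applied to text before embedding. The real pipeline uses text-moderation scoring, Data Loss Prevention, and an LLM review rather than regular expressions; the categories and patterns below are illustrative assumptions only.

```python
# Toy illustration of sifting/masking sensitive attributes before embedding.
import re

SENSITIVE_PATTERNS = {
    "Finance": re.compile(r"\b(?:account|routing)\s*#?\s*\d+\b", re.IGNORECASE),
    "Health": re.compile(r"\b(?:diagnosis|prescription)\b", re.IGNORECASE),
}

def sift(text: str) -> str:
    """Replace spans matching sensitive categories with a masked placeholder."""
    for category, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[MASKED:{category}]", text)
    return text
```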

Composing a Generation with Masterpiece and FutureAI’s Privately Hosted GPU Architecture

Every time a user signs in with FutureAI, the user-data ingestion pipeline within Masterpiece begins and the agentic journey toward a Generation starts. When Masterpiece creates a Generation, it returns the results via the SDK to be presented to the user. To compose Generations in a timeframe acceptable to users, we built Masterpiece on our own privately hosted GPU architecture leveraging NVIDIA H100s, and we return Generations to you within 5,000 milliseconds.

Today, Generations are in the same modality as the content you create. We are already developing Leo 2, which you will be able to prompt to create a Generation in the modality Masterpiece believes is most optimized for the user: Leo 2 will agentically choose whether the content is text, image, or video, and generate it in the language it recognizes the user speaks. Our vision for Leo 2 is to reach a fully generative state (“FGS”), where all the content presented to the user is generated by Masterpiece and Leo, so each version of your application is generative and in the format most desired by the user.

How We Calculate Input MTok and Output MTok, and How Masterpiece’s Consumption Is Priced

Each time Masterpiece creates a Generation, it takes the user tokens relevant to the Generation and prompts Leo 1 with them. This is what we consider Input MTok within Masterpiece, and pricing for Input MTok starts at $19 per MTok. For each Generation, Masterpiece generates the content from your dataset that it has determined is most optimized for the user; this constitutes Output MTok, which starts at $75 per MTok. For both Input and Output MTok, we do not count spacing, blanks, or empty tokens.
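Under the stated rates, consumption can be estimated as follows. This sketch counts billable tokens by skipping spacing, blanks, and empty tokens, as described above; the tokenization itself is Masterpiece’s, and the helper names are our own assumptions.

```python
# Sketch of estimating the cost of one Generation at the published rates.
INPUT_USD_PER_MTOK = 19.0
OUTPUT_USD_PER_MTOK = 75.0

def billable_tokens(tokens: list[str]) -> int:
    """Spacing, blanks, and empty tokens are not billed."""
    return sum(1 for t in tokens if t.strip())

def generation_cost(input_tokens: list[str], output_tokens: list[str]) -> float:
    """Cost in USD for one Generation: input and output billed per MTok."""
    mtok = 1_000_000
    return (billable_tokens(input_tokens) / mtok * INPUT_USD_PER_MTOK
            + billable_tokens(output_tokens) / mtok * OUTPUT_USD_PER_MTOK)
```

For example, one million billable input tokens and one million billable output tokens would cost $19 + $75 = $94.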

To start building, you can pay per MTok, or choose an Enterprise tier of Masterpiece, in which a FutureAI implementation team will support you. This ongoing interaction with our architecture team provides your team with the latest updates and early releases for Masterpiece. FutureAI offers discounts on MTok when you sign an enterprise agreement with a revenue commitment.

FutureAI Raises $5.8m in Seed Funding and Partners with Google on an AI Model for Generative Interfaces

FutureAI will usher in a new era of development with the launch of Leo 1, our new poly-agentic AI architecture for building generative applications and user interfaces. Leo 1 utilizes future-looking data, enabling developers to create bespoke user experiences designed to strengthen engagement and increase conversion rates.

Our $5.8m Seed Round and AI Partnership with Google Cloud to Unleash New Generative AI Era for Developers to Leverage Private User Data

As part of the Leo 1 launch, we are announcing $5.8m in seed funding led by PivotNorth Capital, Village Global and Jay McGraw’s JPM Capital.

“Our AI model Leo 1 is a massive shift in AI that enables developers to create opt-in applications that deliver a personalized experience to every user with a single click,” said Lee Hnetinka, FutureAI’s founder and chief executive officer. “Until now, application development and user interfaces have hinged on clicks and data science, leveraging historical data. Applications have often leveraged other users’ behavior and machine learning techniques like collaborative filtering and content-based filtering to build recommender algorithms that determine how the interface is generated. These algorithms produce a prediction of what is likely to perform well, and those predictions are passed along to a marketer who makes a final decision about what is shown to the user and builds the collections for the user interface. In some instances this is done manually, and in others algorithmically, but in both cases it can only be done using historical data; it does not leverage future-looking data. This is all changing now with our new generative AI model, which combines a user’s data and a developer’s data to generate an application generatively and personally.”

“Generative AI can help developers create highly personalized experiences in applications and bring users more relevant and helpful content,” said Dr. Ali Arsanjani, Director, AI/ML Partner Engineering at Google Cloud. “FutureAI’s decision to build on Google Cloud means its teams will have reliable access to our AI optimized infrastructure, compute, and AI tooling as they go about training and scaling AI models, and we’re pleased that Google Cloud’s technology will support FutureAI’s mission.”

Creating an Architecture to Ensure Privacy and Security for Each User to Opt In and Connect External User Data, Starting with Their Gmail and Plaid Transaction Data, to Receive a Highly Personalized Experience

Leo 1 is designed with an all-new poly-agentic AI architecture encompassing three agents that leverage external user data in a private, secure, opt-in manner; users can first opt in to connect their Gmail and Plaid data to create a highly personalized experience. By enabling developers to build their application generatively, users are delivered a more personal experience, increasing the metrics developers look to optimize, such as conversion, transaction frequency and size, and customer lifetime value.

The first agent is an LLM Sift that removes any private tokens not used for generating the application and interface for a user. We remove any data that falls into categories such as financial, health, and a handful of other categories that we deem private to users and do not use to generate a user’s interface. This data is identified and removed before we ingest a user’s data, using a combination of text-moderation scoring, data loss prevention, and an LLM sifting technique to ensure these tokens are never ingested or used for generating the application and the user’s interface. All data that users share is user-consented, and users have full control over the data they share with FutureAI.

Our second agent handles the contextualization and clustering of the user’s tokens, using a combination of RAG and LLM contextualization to cluster the tokens and ready them for pairing with a developer’s tokens.
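As a toy illustration of the clustering idea, a greedy similarity-threshold pass over embedding vectors looks like the following. Masterpiece’s actual agent combines RAG and LLM contextualization; this sketch shows only the vector-clustering concept, with made-up helper names.

```python
# Toy sketch: greedily cluster embedding vectors by cosine similarity.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_by_threshold(vectors: list[list[float]], threshold: float = 0.9) -> list[list[int]]:
    """Each vector joins the first cluster whose seed it is similar enough to,
    otherwise it starts a new cluster. Returns lists of vector indices."""
    clusters: list[list[int]] = []
    for i, v in enumerate(vectors):
        for members in clusters:
            if cosine(v, vectors[members[0]]) >= threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```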

The third agent in our architecture is the pairing method, which pairs a user’s tokens with a developer’s tokens to generate the application’s user interface. This agent is the largest LLM of the three and is our AI model, Leo 1, which has been trained for generating user interfaces with personalization at its core.

The combination of these three agents forms our poly-agentic architecture and allows Leo 1 to generate an application’s user interface leveraging opt-in access to users’ Gmail and Plaid data, while keeping user data private and secure in our own GPU clusters and ecosystem. To ensure all user and developer data is kept private and secure, we host our entire architecture ourselves, ensuring no user or developer data is passed back to any third party or used for training any LLM.
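The pairing step can be pictured as scoring a developer’s items against the centroid of a user cluster and keeping the top matches for the generated interface. This is a conceptual sketch under assumed representations, not Leo 1’s actual pairing method; all names are hypothetical.

```python
# Conceptual sketch: pair a user cluster with a developer's catalog items.
def centroid(vectors: list[list[float]]) -> list[float]:
    """Mean of a cluster's embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def pair(user_cluster: list[list[float]], developer_items: list[tuple], k: int = 3) -> list:
    """developer_items: list of (item_id, embedding).
    Returns the top-k item ids by dot product against the cluster centroid."""
    c = centroid(user_cluster)
    scored = sorted(
        developer_items,
        key=lambda item: sum(a * b for a, b in zip(c, item[1])),
        reverse=True,
    )
    return [item_id for item_id, _ in scored[:k]]
```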

Beginning the Migration from a Graphical User Interface to an Automated Intelligent User Interface (AII) Architecture

To build an application with a generative interface, we use tokens instead of clicks. These tokens are a user’s Gmail information and Plaid transaction data, which give us visibility into what is relevant to that user. This lets us take into consideration a dimension of the user that a single developer cannot access, enabling a generative interface that is more personalized to the user.

When ingesting user data, we ingest all modalities of data: text, images, and email attachments, and use these to build a rich persona of the user. The specificity of the data ranges from the specific colors of a sneaker a user likes, to the ingredients we see them ordering frequently, to conversations they are having with a partner or personal trainer. We use these tokens to migrate from a predictive method to a generative method of generating a user’s interface, one that leverages semantic reasoning. For example, Leo 1 reads the ingredients and menus for a restaurant, looks at the product photos and descriptions, and uses these tokens to generate the interface for each and every person.

With this migration to semantic-level generation of an interface, we enable the transition from a graphical user interface that engineers build themselves to an interface that Leo 1 builds for them, and for their application, autonomously.

“Self-Driving” For Your Application’s Interface

Generative interfaces for an application require a tremendous amount of trust from both users and developers. For users, trust must be built around user-consented sharing of data. Each time a user shares data with FutureAI, we ask for the user’s consent and ensure they know what they are sharing. This consent is the basis for user control of their data.

Developers must trust an AI to generate their interface without being told why. When we generate the user interface for the developer’s application, we are generating the interface with their data based on the user’s data, without sharing the user’s data with them. For instance, for an application that lets a user book restaurants, we generate the restaurants relevant to that specific user based on the menus Leo 1 has read. For a brand that sells sneakers, we generate an interface from the user’s tokens that surfaces the tennis shoes and hiking boots important to that user. For a publication, we generate the user’s interface based on the tokens that tell us which articles are most relevant to them.

All of this involves an immense amount of trust on both sides: users with FutureAI, and developers with FutureAI. This is why we think of FutureAI as “self-driving for your application’s interface”: just as we trust an AI to drive for us, we believe users will trust FutureAI with their data, and developers will trust FutureAI to generate their interface for them.

Developing Generatively Means Developing Better, Faster, Cheaper, and Personalized

“FutureAI is the culmination of two years of work, because it has taken a combination of the right technology, the right team, and the right timing to introduce a breakthrough in how the interface for an application is generated,” added Hnetinka. “To quantify our investment, we look at what developers and users stand to gain with generative applications and interfaces.”

On the developer side, the cost basis before FutureAI for generating an interface involves a data science team, a data-analysis tool, and a marketer, which combined cost roughly $560k annually. Such teams are tasked with determining the optimal content and products for a user’s interface, and the outcome is built predictively. It takes time for these teams to build the interface and the reinforcement learning for A/B testing, and the interface isn’t as real-time as an organization would want it to be. By comparison, developers can build their application generatively with Leo 1 for a fraction of the cost. Today we are making Leo 1 available for $19 per Input MTok and $75 per Output MTok, enabling developers to adopt our new poly-agentic architecture and integrate it into their application with minimal upfront investment. We aim to generate each application’s interface in ~5,000 ms, and Leo 1 is available today via our API and on Google Cloud Marketplace.

Migrating to a generative application and interface architecture gives users a personalized experience: an interface made just for them. This is the holy grail of development, and it is now possible with Leo 1. Delivering a personalized interface and application increases metrics across the board, such as conversion rates, transaction sizes, frequency of visits, and customer lifetime value. These gains give an organization a holistic view of FutureAI’s value across engineering, data, marketing, and finance teams, driving increased adoption and install rates of FutureAI within every developer ecosystem.

Generating Masterpieces

We named our AI model after Leonardo da Vinci because he pioneered and refined the techniques and methods for how art was generated, leading to the creation of the Mona Lisa. We take inspiration from da Vinci’s innovations in Leo 1, aiming to deliver to developers the tools to generate a masterpiece each time they generate an interface for their user.

If you are interested in getting access to Leo 1, tell us what you will build with it to request developer access.

If you’re interested in building the next version of Leo 1, We’re Hiring.