I had the pleasure of attending Boomi World 2024 this past week as a Boomi Partner. It was an exciting event, as it was the first in-person Boomi World since COVID. It was a whirlwind of rerouted plane rides due to inclement weather, copious amounts of food, hyped-up Boomi announcements, and brain-twisting technical discussions. The theme of Boomi World this year was “the power of connection”, and I did indeed make many connections at this year’s event. I had a great time meeting the diverse array of Boomi customers and partners, learning about all the exciting things people are doing with Boomi along with the challenges they face. It was a great learning experience, but the most fun part was meeting people face-to-face. I was able to reconnect with former clients and co-workers as well as the wider Boomi community. We, as humans, crave that in-person connection, and being able to shake someone’s hand or give them a long-deserved hug was so satisfying this past week. It was eye-opening to hear about everyone’s journey with Boomi over the years and how their career paths have changed drastically because of it.
Now that Boomi World 2024 is over and I’m sitting on the plane a couple of inches rounder and several pounds heavier, I wanted to reflect on what left a lasting impression on me this Boomi World.
- Boomi AI and AI Usage in General – We’re scratching the surface of what we can do with AI at this point, but Boomi is heavily invested in evolving the platform and its AI capabilities (which is good for you if you are a Boomi customer). While AI is still maturing, businesses need to start exploring this now and prepare themselves for the dominance of AI, or they risk being left behind. The great thing about all of this is that Boomi has positioned itself at the forefront of this “AI big bang,” as Steve Lucas, Boomi’s CEO, puts it. Boomi will enable businesses to integrate with AI and easily connect their data to the different AI offerings.
- Boomi Integrated AI Agents and Agent Garden – While I still need to fully wrap my head around the concept of AI Agents and the Agent Garden, AI Agents are supposed to be automated minions that work within your organization’s Boomi tech stack and help you automate tasks ranging from IT onboarding to finance approval workflows. Boomi did not go into how they work or share technical details on what would drive them, other than that they would leverage different AI models to power these automation tasks. This is supposedly the way businesses will run in the near future, and you can download and integrate these AI agents from the curated Agent Garden within the Boomi platform. I will be keeping a close eye on this as more information is released.
- Boomi API Management – Boomi announced the acquisition of two companies, Mashery and APIIDA, to beef up its API management offering. While it is still unclear exactly how these products will be folded into the Boomi API Management module, what we do know is that Boomi API Management can now be used for third-party API management, not just the APIs you create within Boomi. This is a massive step for Boomi in the API management space, as it allows Boomi to offer a much more comprehensive solution for organizations that have a wide variety of APIs across their IT landscape.
- Boomi Dedicated Cloud Service (DCS) – This is a new product offering that was sorely missing from Boomi’s hosting lineup. Boomi will now host your Boomi runtime environments and manage those servers much like the Managed Cloud Service (MCS). The big difference here is that you’re allowed VPN and remote access to these servers to upload configuration/properties files: think something along the lines of an SAP JCo additional properties file or an Active Directory configuration file. You get all the benefits of MCS, where Boomi takes care of security patching, upgrades, and disaster recovery, but now with the added benefit of being able to access and manage your servers for those exceptional cases that require that level of access.
Artificial Intelligence – Boomi AI, Open AI, and AI User Agents
While there were many great discussions and new announcements at Boomi World this year, the biggest thing on everyone’s mind was Artificial Intelligence. We’re so used to having information and automation at our fingertips in our daily lives that it was only a matter of time until that bled over into the business world. Turning on the lights in your house is just a voice command to Alexa or Google Home. My kids don’t open up a tablet or computer to look up the weather anymore; they ask Alexa if it’s cold outside. That ease of accessing information and getting things done is now at the forefront of the business world’s mind, and AI is how we get there.
However, I will say that we’re very much on the ground floor at this point. We’re all still exploring and learning what AI is capable of and how to use it while alleviating genuine AI concerns around security, privacy, ethics, and more.
Boomi AI
Boomi announced Boomi AI, a generative AI feature that helps you create your integration canvas for faster integration development. You can type something like “Create a Customer integration between Salesforce and NetSuite for me” into an AI chat box, and the AI will generate a canvas with a connection to Salesforce to pull the customer data object (operation and profile), a connection to NetSuite (operation and profile), and a map to transform the data using the long-existing mapping feature Boomi Suggest.
While Boomi hyped this up as integration automation and the next stepping stone in integration development, I see this more as a helper/starter function that gives you a skeleton to work off of (at least for the immediate future). Every organization has its own set of data quirks, custom fields, data lookups, and business logic that needs to be considered. All of these things require a human element that is beyond what an AI is able to do at this point in time. Until organizations can truly automate their workflows end to end and ensure high data quality throughout, there will be a need for the traditional software development lifecycle with human developers. However, I feel that this greatly lowers the development barrier for those who are new to or are unfamiliar with Boomi. I believe that is a step in the right direction and that Boomi has the right vision when it comes to AI. I am excited to see how far they can take this, as it can propel and solidify Boomi’s position as a leader in the iPaaS industry.
What I am much more excited about is Boomi AI’s ability to generate documentation on Boomi processes. Even if the generated documentation does not have everything you would want or need, it gives developers a great template and structure to work off of. We now have a unified documentation template provided by Boomi AI that can be molded to meet the needs of each development team. I know that being overly excited about some documentation might seem odd, but as an architect and developer who does not enjoy doing documentation, this will give me a great starting point instead of having to build something from scratch. The task seems a little bit less daunting this way and gets my lazy butt to do documentation in a timely manner.
OpenAI & Other Large Language Models (LLMs)
The other big buzz around AI at Boomi World was using OpenAI and other LLMs to drive your Boomi integrations and business workflows. People were definitely intrigued; the AI sessions were packed, with some offering standing room only if you came late. In many sessions, it felt like people were forcing AI into things that didn’t benefit from it, but since AI is the next big thing in the IT world, everyone was rushing to find a use for it. While AI is indeed the future, there are still technical hurdles, including security, privacy, and ethical risks, that need to be worked through. Boomi has positioned itself to be one of the main driving forces here and will play a critical role in AI implementation with its ability to connect a wide array of systems into the AI ecosystem.
Out of all the sessions, only two of the AI sessions stood out to me. The first one was a presentation by Multiquip, a construction and electrical generator manufacturer based in California. Their use case was a very interesting one, and it was a worthwhile session to attend. Multiquip has a massive data store of all of their equipment’s user manuals. These manuals are stored as PDFs, and they wanted an easier, more intuitive way of extracting data from them. The vision is to allow their CSRs, technicians, and, to a certain extent, their customers to interact with a chatbot and pull data from the user manuals using natural language. The goal is a more natural, human-like interaction rather than a raw dump of the information in the manual. Something akin to:
User: “What voltage can the SG1600C4F generator work with?”
Response: “The SG1600C4F studio generator is designed for the entertainment industry so it has a low noise output to help with filming. It is capable of outputting 120v, 208v, and 240v based on a selectable switch. This allows the generator to work with the North American, European and Asian electrical standards.”
While this seems like a rather simple interaction, the technical challenges that had to be overcome were non-trivial. I’ll try to give a high-level description of what is happening below, but just know that there is still a lot of work to be done and this was a proof of concept.
While humans can read PDFs just fine, computers only understand numbers and strings. In order to make the PDF user manuals readable to OpenAI, they first needed to be converted into a format the AI can understand. The Multiquip team leveraged OpenAI to convert their PDF files into a JSON data structure and vectors (numbers that represent the keywords and data elements in each manual). Think of vectors in this particular use case as a fancy index. They then stored these vectors in a vector database, with references back to the JSON data that represents the user manual content. Each time a question comes in, it also has to be converted into vectors so the keywords in the question (the model number SG1600C4F, voltage settings) can be matched against the vectors from the user manuals to find the relevant information. All of that information is then sent to OpenAI to formulate a natural, relevant response back to the user.
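To make that indexing half of the pipeline a bit more concrete, here is a minimal Python sketch of the general technique: pull the text out of a PDF manual, embed the chunks with OpenAI, and keep each vector alongside the text it came from. The library choice (pypdf), the embedding model name, the chunk size, and the plain Python list standing in for a real vector database are all my own assumptions for illustration; Multiquip did not share their actual implementation details.

```python
# Minimal indexing sketch: PDF manual -> text chunks -> embeddings -> "vector store".
# Assumptions for illustration: pypdf for extraction, OpenAI's text-embedding-3-small
# model, and a plain Python list standing in for a real vector database.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_chunks(pdf_path: str, chunk_size: int = 1000) -> list[str]:
    """Pull the text out of every page and split it into fixed-size chunks."""
    text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(texts: list[str]) -> list[list[float]]:
    """Turn each text chunk into an embedding vector (the 'fancy index' for lookups)."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

# Build the "vector store": each entry keeps the vector plus a reference back to the
# original manual text, mirroring the JSON-plus-vectors design described above.
vector_store = []
for manual in ["SG1600C4F_manual.pdf"]:  # hypothetical file name
    chunks = extract_chunks(manual)
    for chunk, vector in zip(chunks, embed(chunks)):
        vector_store.append({"source": manual, "text": chunk, "vector": vector})
```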
Full Workflow
This is what the full workflow looks like, with all the different systems that are involved.
- A question is asked through a chatbot on a Boomi Flow web page/portal.
- That question is sent from Boomi Flow to a Boomi Web Service on the Boomi Integration Platform.
- The Boomi integration process then calls OpenAI to pull the keywords out of the question, such as the model number, and convert them into a vector.
- The Boomi process then does a vector lookup against the vector database and pulls any matching or adjacent vectors (aka information from the user manual).
- The question is then sent to OpenAI again via the Boomi process, but this time with the supplemental information from the user manual to help answer it. This technique is called Retrieval-Augmented Generation (RAG), and it lets you ground the model’s answer in your private data without handing your entire dataset over to a public model such as OpenAI. You still pass along small, relevant snippets as part of the RAG prompt, but you never expose the full data store. (A rough code sketch of this query path follows the list.)
- OpenAI takes in the question and the supplemental information and composes a response in natural, conversational English.
- The Boomi integration then takes that response and returns it to Boomi Flow, where the answer is displayed in the chat box to the user.
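For the query side, here is a rough Python sketch of what steps 3 through 6 might look like at code level. It reuses the `client`, `embed` helper, and `vector_store` from the indexing sketch above; the brute-force cosine-similarity scan stands in for the vector database lookup, and the prompt wording and model name are my own assumptions rather than anything Boomi or Multiquip showed.

```python
# Query-path sketch: embed the question, retrieve the closest manual chunks,
# then ask the model to answer using only that retrieved context (RAG).
# Reuses `client`, `embed`, and `vector_store` from the indexing sketch above.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; higher means more closely related."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, top_k: int = 3) -> str:
    # Step 3: convert the question into a vector.
    q_vector = embed([question])[0]

    # Step 4: "vector database" lookup, here a brute-force similarity scan.
    ranked = sorted(vector_store, key=lambda e: cosine(q_vector, e["vector"]), reverse=True)
    context = "\n\n".join(entry["text"] for entry in ranked[:top_k])

    # Steps 5-6: send the question plus the retrieved snippets to the model (RAG),
    # so only small, relevant excerpts of the manuals ever leave your data store.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model for illustration
        messages=[
            {"role": "system", "content": "Answer using only the provided manual excerpts."},
            {"role": "user", "content": f"Manual excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What voltage can the SG1600C4F generator work with?"))
```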
It is a great use case that can be leveraged by different organizations and I see this being expanded on in the future.
Another good AI session was about AI hallucinations (when the answer from the AI doesn’t make sense or is factually inaccurate) and how a human-in-the-loop workflow can help with that. We got to see a PoC implementation where employees could email HR questions about their benefits or 401k and get an email response back from a generative AI running in Amazon Bedrock. The Boomi integration platform was the driving force in this orchestration: grabbing the questions from the email server, doing all the vectorization and RAG described in the Multiquip process above, and then prepping the email response. The key thing here, and what I feel most AI implementations sorely need, is a human in the loop who reviews the responses for accuracy before they are approved to go out. AI still needs a massive amount of training data before it gets to a level where it does not simply make things up. Even with all of the publicly available data on the internet, it is still not enough to fully train these LLMs, and AI companies are desperately trying to get access to private data sets such as news articles, science publications, and research papers to make their models more accurate. Until then, the human element is the key to successful AI implementation and usage.
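As a thought experiment on what that human-in-the-loop gate could look like in code, here is a small Python sketch where the AI-drafted reply is never emailed directly: it is parked in a review queue, and only a human approval (with optional edits) releases it. The queue, the `draft_reply` placeholder, and the `send_email` stub are all hypothetical scaffolding, not the PoC’s actual design.

```python
# Human-in-the-loop sketch: the model only drafts; a person approves before anything is sent.
# Everything here (queue, draft_reply placeholder, send_email stub) is illustrative scaffolding.
import uuid

review_queue: dict[str, dict] = {}  # pending drafts keyed by a review id

def draft_reply(question: str) -> str:
    """Placeholder for the generative step (e.g. a RAG call like the one sketched earlier)."""
    return f"Draft answer to: {question}"

def submit_for_review(employee_email: str, question: str) -> str:
    """Draft a response and park it for human review instead of emailing it directly."""
    review_id = str(uuid.uuid4())
    review_queue[review_id] = {
        "to": employee_email,
        "question": question,
        "draft": draft_reply(question),
        "status": "pending",
    }
    return review_id

def review(review_id: str, approved: bool, edited_draft: str | None = None) -> None:
    """A human reviewer approves (optionally after editing) or rejects the draft."""
    item = review_queue[review_id]
    if approved:
        item["status"] = "approved"
        send_email(item["to"], edited_draft or item["draft"])
    else:
        item["status"] = "rejected"  # fall back to a human-written answer

def send_email(to: str, body: str) -> None:
    """Stand-in for the real outbound email step."""
    print(f"Sending to {to}:\n{body}")

rid = submit_for_review("employee@example.com", "How much can I contribute to my 401k?")
review(rid, approved=True)
```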
Closing Thoughts
I had a great time at Boomi World 2024, but I think I would do a couple of things differently on the next go around. I will probably pick and choose my sessions a lot more carefully to ensure I cut out some of the fluff and focus on sessions with “more meaty stuff.”
Another thing I would do is wear some really comfortable shoes. There was a good amount of standing/walking, some of it by choice, some of it not. Either way, I have to work off all of this food. The pants are getting quite tight after this Boomi World, and I’m too old and grumpy now to go clothes shopping. Sweatpants it is for the foreseeable future. Thank goodness for working from home.
Fun Fact: In the spirit of this blog post and in keeping with the discussions around AI, this image representing me struggling to work off the pounds was generated using AI.
For more information, please contact us!