AI Innovators: Balancing Innovation and Responsibility in AI

Welcome to our AI Innovators series, where we bring you insights from leading minds shaping the future of artificial intelligence. Romain Sestier, co-founder and CEO of StackOne, joins us today. 

With a strong background in AI and enterprise software, Romain shares his journey, insights on AI strategy, and thoughts on ethical standards in AI. 

Discover how StackOne uses AI to transform integration platforms, and explore Romain’s vision for the future. 

Remember to subscribe to AI Innovators, a bi-weekly interview series spotlighting influential leaders shaping the future and impact of AI in technology and business.

Q&A with Romain Sestier

Tell me a little bit about you and your background.

Romain: My name is Romain Sestier. I’m the co-founder and CEO of StackOne. I founded StackOne to change how integration platforms work. We use LLMs to write deterministic, enterprise-grade integrations for B2B SaaS software. We’ve been in business for two years, selling to B2B SaaS companies and AI agent companies.

What do you like to do in your spare time? What are the things that you’re passionate about outside of work?

Romain: That’s a great question. I have two young kids, a two-year-old and a four-year-old. One of the things I care a lot about is creating a culture of high performance and ownership at StackOne as we work towards some very ambitious goals. At the same time, maintaining work-life balance is what makes it a sustainable, happy environment in which people want to work for the long term.

I try to lead by example as much as possible, so I love spending time with my kids. To be honest, all sorts of things you do with two-year-olds and four-year-olds are kind of weird, but so much fun.

You talked about some of the AI that you’re already starting to leverage. What was that AI strategy? How did that look for StackOne?

Romain: The AI strategy for StackOne was shaped right when we decided to create the company. My co-founder and I were playing around with other business ideas based on LLMs when we realized we could write pretty decent code. 

The thesis was that we could write high-quality integrations with LLMs. One of the main issues with previous generations of integration platforms was that they either had a lot of breadth of integrations but not much depth, or they were vertically focused in one area: super deep, but very narrow. As a result, we could never find an integration platform that suited our requirements for selling our software to enterprise customers. They needed us to integrate with literally everything under the sun.

We realized from the beginning that you needed a different architecture to utilize LLMs and write integrations faster. Today, we’ve built 200 integrations in under two years, and tripled the speed at which we’re building integrations, thanks to our own internal agent.

We’ve also got some custom training data for that agent. The second part of the AI strategy for StackOne revolves around the interfaces we’ve designed. APIs are good for traditional integrations, but agents require different interfaces native to LLMs. That’s what we’ve designed. Recently, we released Toolset, tools that AI agents can leverage in their reasoning and actions.
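To make that concrete, here is a minimal sketch of what an LLM-native tool interface can look like. The tool name, schema, and handler below are hypothetical illustrations, not StackOne’s actual Toolset API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    """A tool an agent can invoke during its reasoning loop."""
    name: str
    description: str           # shown to the LLM so it knows when to call the tool
    parameters: dict           # JSON-Schema-style description of the arguments
    handler: Callable[..., Any]

def list_employees(department: str) -> list[dict]:
    # Hypothetical handler: in practice this would call a unified HRIS API.
    return [{"id": "e1", "name": "Ada", "department": department}]

employee_tool = Tool(
    name="list_employees",
    description="List employees, optionally filtered by department.",
    parameters={
        "type": "object",
        "properties": {"department": {"type": "string"}},
        "required": ["department"],
    },
    handler=list_employees,
)

# The agent runtime matches the model's tool call to the handler by name:
print(employee_tool.handler(department="engineering"))
```

The key difference from a plain API is that the description and parameter schema are written for the model to read, so the agent can decide on its own when and how to call the tool.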

What was the biggest challenge with setting up these integrations? Was it getting the process in place and scaling, or is it unique to each application?

Romain: The most significant challenge with building integrations with LLMs is that they lack domain knowledge. They don’t know the documentation for enterprise tools: sometimes it’s gated, sometimes it’s outdated, and often it gets refreshed very frequently, so their base training is not good enough. And even scraping can’t keep up with such large amounts of data on its own.

Take Workday, for example. They have thousands of endpoints. There’s just too much data for them to handle. And so our thesis was, LLMs will get good at integrating the long tail of integrations. That’s going to get commoditized very quickly. But what’s going to be crucial is getting LLMs, which inherently have the problem of hallucinations and limited context windows, to be able to write these super complex, customized integrations for enterprise software like Workday, SAP, Oracle, ServiceNow, etc.

These are typically incredibly complex, and that’s what we specialize in. The biggest challenge was making that work, and we essentially decided to write the entire architecture from the ground up to handle those complex cases.

Do you use any specific LLM, or are you LLM agnostic? Can the customer switch back and forth?

Romain: Internally, we are not agnostic around our product and our own agent that writes the integrations; we pick the best LLMs for specific parts of the writing. Today, we use a combination of multiple models from OpenAI, Anthropic, DeepSeek, and Google Gemini, all together in a chain. When it comes to our Toolset product, which we sell to AI agent companies, we’re actually agnostic. The toolset can be used by any agent out there.

“We’re not trying to be LLM-agnostic for the sake of it—we chain together the best models available to generate better code, faster.”

Romain Sestier, Co-Founder and CEO of StackOne
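As a rough illustration of that chaining idea, here is a minimal sketch of a multi-stage pipeline that routes each step to a different model. The stage names and the echo stubs are hypothetical; a real implementation would call each provider’s own SDK.

```python
from dataclasses import dataclass
from typing import Callable

# Stub type standing in for a real provider SDK call.
CompletionFn = Callable[[str], str]

@dataclass
class Stage:
    name: str              # e.g. "plan", "generate", "review"
    model: str             # which model this stage is routed to
    complete: CompletionFn

def run_chain(stages: list[Stage], task: str) -> str:
    """Feed each stage's output into the next, so every model handles
    the part of the job it is best at."""
    text = task
    for stage in stages:
        text = stage.complete(f"[{stage.name}] {text}")
    return text

# Example wiring with echo stubs in place of real API calls:
chain = [
    Stage("plan", "model-a", lambda p: f"plan for: {p}"),
    Stage("generate", "model-b", lambda p: f"code from: {p}"),
    Stage("review", "model-c", lambda p: f"reviewed: {p}"),
]
print(run_chain(chain, "build a Workday employee sync"))
```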

You talked a little bit about your AI strategy and data architecture. Have you adopted any kind of data lakehouse, vector database, or other modern approach within StackOne?

Romain: We’ve used multiple tools. Right now, we’re using tools specialized in embeddings, and different tools for the vector databases. We use them as part of our agent that writes integrations; that agent is actually trained on a couple of different datasets.

Number one is documentation that we refresh daily from the integrations themselves. We scrape that information and vectorize it. We’ve built a custom scraper, even for PDFs, to extract API data and parse it in a very specific way that works better for retrieval.

That’s one thing. The other thing that we feed into it is all the data from the previous integrations that we’ve built. And that is actually one of the most impactful datasets from a quality perspective.

All that data gets vectorized as well and used to create new integrations. Every integration we build is better than the last one, and we get human feedback from our team and from our customers as part of the build of those integrations. So yes, we do use vector databases and embeddings as part of the agent that we build.
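As a rough sketch of that scrape, embed, and retrieve loop, here is a toy version. The hash-based `embed` function is a stand-in for whichever embedding model is actually used, and the in-memory store is a stand-in for a real vector database.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: a deterministic-per-run toy vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class DocStore:
    """Tiny in-memory stand-in for a vector database."""
    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, chunk: str) -> None:
        self.chunks.append(chunk)
        self.vectors.append(embed(chunk))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        # Cosine similarity; vectors are already unit length.
        scores = [float(q @ v) for v in self.vectors]
        top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self.chunks[i] for i in top]

store = DocStore()
store.add("GET /employees returns all workers, paginated by cursor.")
store.add("POST /time-off creates a leave request for an employee.")
print(store.search("how do I list employees?", k=1))
```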

You mentioned these APIs and connections to these different applications. As they change their architecture, how does that change the way you connect to them?

Romain: One of our product’s core value propositions is that we’re faster and better at fixing issues when APIs change. 

First of all, there is a warning the vast majority of the time, so you have time to make the change. The question then becomes what the best way is to make that change so it’s non-disruptive and backward compatible with existing implementations, because we abstract these APIs.

That’s where LLMs come in: “This is the new model that’s available. This is the new endpoint. This is the new field that you should be using.” They help us speed up the time it takes to make those decisions. The second thing, and this is non-LLM-specific architecture, is continuous monitoring of all of our integrations and all of our endpoints, with testing whenever something breaks. It automatically alerts us, which means we can be more responsive when that happens. But that’s the minority of the time.
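A minimal sketch of that kind of continuous endpoint monitor, assuming hypothetical endpoint URLs and a stub `alert` hook rather than any specific paging tool:

```python
import time
import urllib.request
from urllib.error import HTTPError, URLError

ENDPOINTS = [
    "https://api.example.com/employees",   # hypothetical endpoints
    "https://api.example.com/time-off",
]

def alert(message: str) -> None:
    # Stand-in for a real alerting hook (Slack, PagerDuty, etc.).
    print(f"ALERT: {message}")

def check(url: str, timeout: float = 5.0) -> None:
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except HTTPError as exc:   # the server answered with a 4xx/5xx status
        alert(f"{url} returned HTTP {exc.code}")
    except URLError as exc:    # DNS failure, refused connection, timeout
        alert(f"{url} unreachable: {exc.reason}")

def monitor(interval_seconds: int = 300) -> None:
    """Poll every endpoint on a fixed interval and alert on failures."""
    while True:
        for url in ENDPOINTS:
            check(url)
        time.sleep(interval_seconds)

# monitor()  # runs forever; invoke from a scheduler or background worker
```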

You mentioned that you have your own AI agent writing the code. What checks and balances do you have to ensure its accuracy?

Romain: Having the right checks and balances was one of our main priorities from day one, because we essentially set ourselves the challenge of using a system that does not natively output enterprise-grade code to write enterprise-grade code. The reality is that humans were still writing the integrations for a long time, but with a coding assistant.

We’re bringing it into actual code generation, but we still have a robust human-in-the-loop process. It is primarily still driven by humans, but thanks to our AI agent, the velocity at which we can do it and the depth and complexity of the integrations we can handle are much greater.

What are one or two exciting AI advancements you’re hearing about and considering incorporating into your product?

Romain: If you take a step back and look at the talk around AI agents in general, there’s been a lot of hype about truly autonomous systems. We work a lot with companies that are building agents for enterprises and selling themselves as agents, and because we’re in the early days of AI agents, a lot of the time it’s really more like a generative AI flow inside a traditional SaaS product.

What I’m excited about is when we get to a point where we do have autonomous systems running recruitment, running your knowledge management, running your productivity, like personal assistants.

“We’re excited about AI agents that go beyond hype—autonomous systems that actually run recruitment, knowledge management, or productivity, with the right reasoning, guardrails, and up-to-date data.”

Romain Sestier, Co-Founder and CEO of StackOne

I think everyone has this idea of a personal assistant. When you think about role-based AI agents within an enterprise that take accurate actions within the tools, have accurate, up-to-date knowledge of the systems they’re interacting with, and have checks and balances and guardrails around the actions they’re able to take, we’re a little ways away from that actually happening. Still, I’m very excited about that becoming a reality, probably in the next year or two.

How do you think about ethical AI as the CEO of your company, as you learn to leverage it in your business?

Romain: First, on the ethical AI piece, it’s a significant concern to keep top of mind. We should not jump at the idea of AI automating so much that we’re careless about how it’s done. Regulations are necessary, and companies are thinking very carefully about this. There are a couple of companies I really like that are developing offerings around measuring a model’s bias against its training dataset, or benchmarking it against a human, such as a recruiter. Recruitment is a perfect example of where bias can creep in very easily.

You realize that models are often less biased than humans by certain metrics. So that’s one thing: measuring it, improving it, having standards and regulations around it, and implementing AI ethically is important. There are other ways to involve a human in the loop, but I don’t think that’s always practical or scalable. So you need ways to build datasets to measure and mitigate bias in the models.

If you had to place one big bet on AI’s future, what would it be?

“The big bet is that AI will benefit incumbents more than startups—because they already own the data and control the interfaces.”

Romain Sestier, Co-Founder and CEO of StackOne

Romain: One big bet, which is maybe a bit controversial: I think AI will serve the incumbents better than the new entrants in many industries. The incumbents already have the data and control the interfaces, the APIs, the MCP servers, the design, et cetera, and how they expose that data to third parties. The new entrants must rebuild that from scratch, and they have to build something extremely novel that doesn’t exist today to capture a new share of wallet. The incumbents, as long as they’re moving quickly, can launch many AI features on top of their existing competitive advantage: the source of truth and the data they hold.

So you mentioned DeepSeek. You guys leveraged that. That was somewhat controversial because, for one, it was from China, and two, it was open source.

Romain: First of all, around where DeepSeek came from: every model has guardrails within it, and I’m not saying some are better than others. You have to be careful about which context you’re using it in. In our case, we’re writing integrations, which is, and I can say this because I’m doing it, one of the most boring things you can ask an AI agent to do.

I think the bias concern doesn’t really apply here. We’re just looking for the model with the best capabilities. Llama 4 looks good too. At the end of the day, it comes down to capability in writing code, context windows, and all that stuff. So, for us, that wasn’t much of a concern.

They introduced the concept of AI reasoning. What’s your thought on that? Does it apply to your business, or is it just more rationale for what’s happening?

Romain: A hundred percent. AI reasoning definitely applies to StackOne. The reasoning models emerged towards the end of last year, but we built our StackOne agent, our internal agent for writing integrations, almost two years ago. When we started working on it, we realized we needed some kind of chain of thought to get to a better-quality output: sometimes having an LLM as a judge, sometimes some kind of reasoning within the same model.

So we implemented something similar, which was initially extremely slow. Then, when the reasoning models came out, we migrated to them because they natively support reasoning. It was a huge leap forward for handling complex tasks that require a lot of input data: the model can process and refine it to the point where it makes very good decisions, especially design decisions about APIs.
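Here is a minimal sketch of the generate-then-judge pattern described above, with echo stubs standing in for the actual model calls; the prompt formats are hypothetical.

```python
from typing import Callable

ModelFn = Callable[[str], str]

def generate_with_judge(
    task: str,
    generator: ModelFn,
    judge: ModelFn,
    max_rounds: int = 3,
) -> str:
    """Draft an answer, ask a judge model to critique it, and revise
    until the judge approves or we run out of rounds."""
    draft = generator(task)
    for _ in range(max_rounds):
        verdict = judge(f"Task: {task}\nDraft: {draft}\nReply APPROVE or critique.")
        if verdict.strip().startswith("APPROVE"):
            return draft
        draft = generator(f"Task: {task}\nPrevious draft: {draft}\nFix: {verdict}")
    return draft

# Echo stubs in place of real model calls:
gen = lambda p: f"integration code for ({p[:40]}...)"
jdg = lambda p: "APPROVE"
print(generate_with_judge("map Workday workers to a unified schema", gen, jdg))
```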

What’s your AI strategy from a usage and adoption perspective across the company?

Romain: One of the guidelines that we give our employees for using AI is that we strongly encourage it. I think that everyone is new at it by definition because it is new. Everyone has to learn how to prompt. Everyone has to learn how AI reasons, which models work better for certain tasks, and you want everyone to try and experiment as much as possible and share knowledge with others.

So, we already have a huge drive and a kind of top-down push for everyone to utilize it as much as possible. When it comes to security, we’ve been doing that for a while. We are SOC 2 certified.

Like a lot of software companies that go through this process, we had to implement a very strong policy around how we vet the tools we use and our suppliers and vendors. By the way, one of the main selling points of GoSearch was the fact that it didn’t store the data, so we were able to do real-time searches within our tools.

The fact that it doesn’t create a replica of the underlying system is super important to us. That’s the kind of vetting that we do with all of our suppliers. Being an integration platform ourselves, we’re very attuned to this: how do they build the integration? Do they replicate the data? Where does it get processed? It sounds like a privacy or legal concern, but it’s actually very real.

You should pay attention to where your data is being replicated and its supply chain. This is especially true for your organization’s most sensitive data, such as employee, customer, and business information.

Since you mentioned GoSearch… what makes an enterprise search product differentiated, in your opinion?

Romain: This is where many differentiators can be built. The more data you connect to, the more complexity you bring to access controls, so you have to make sure you build your software from the ground up to be as zero-trust as possible. This means limiting the amount of data you replicate, access, or share. Building from that baseline helps, versus building software that shares everything by default and then having to restrict access; that is much more complicated and much more prone to errors.
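As a small illustration of that default-deny principle, here is a toy sketch with hypothetical users and resources: nothing is visible unless explicitly granted, which is the opposite of share-everything-then-restrict.

```python
# Default-deny access control: nothing is visible unless explicitly granted.
grants: dict[str, set[str]] = {
    "alice": {"hr-docs"},            # hypothetical user -> allowed resources
    "bob": {"eng-docs", "hr-docs"},
}

def can_access(user: str, resource: str) -> bool:
    # Unknown users or resources fall through to "deny" automatically.
    return resource in grants.get(user, set())

def search(user: str, results: list[dict]) -> list[dict]:
    """Filter search results down to what the caller is allowed to see."""
    return [r for r in results if can_access(user, r["source"])]

hits = [{"title": "Offer letter", "source": "hr-docs"},
        {"title": "Deploy guide", "source": "eng-docs"}]
print(search("alice", hits))  # only the hr-docs result survives
```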

From a buyer’s perspective, we are obsessed with that concept because that’s how we designed our software. Not storing any of our customers’ data, as an integration platform, is an important point, and something that is going to become more and more part of the consciousness of the CISOs, CIOs, and buyers who purchase the enterprise software tools that access their data. I am already seeing more sensitivity to data access than we saw five or ten years ago.

How important are zero data retention (ZDR) agreements for you when purchasing software? Is that something your customers require of you?

Romain: Both. One of the key things about our platform is that we don’t retain data; therefore, we enable our customers to sell to enterprise organizations that care about the data supply chain. So our customers ask for it.

We also look for it ourselves because we want to make sure that we control who has access to the data, how it’s being accessed, and to monitor it as much as possible.

How do you get your team to use AI tools so you always stay ahead of the curve? There’s some risk there: some say that junior engineers are going to be replaced in the next year. So how do you uplift your skill set, or your AI aptitude, to stay ahead of that potential risk?

Romain: There’s a bit of a misconception about how quickly AI is being adopted by enterprises today. I think that’s because a lot of the focus right now is on AI models, proofs of concept, and some very small-scale experiments at small-scale companies.

But really, the barrier to adoption is twofold. One is security and accuracy: matching the requirements of risk-averse companies.

The second thing is design. Paradoxically, outside of learning about prompts, models, integrations, and all that, I think learning about user design and user experience design is equally important. You can have the best models, but if they don’t fit within your users’ workflows, they’re not going to be adopted. What many companies are getting wrong today is trying to invent a workflow that is entirely separate from any of the tools their customers are already using.

What we’re seeing with our customers is that they see the fastest adoption when they can say: you don’t even have to leave the tool you’re in; we plug into it, and we enrich and enhance what it does.

It doesn’t mean that you can’t log into a separate UI; that’s not the point. But it at least connects with your existing workflows. I think that’s how we try to stay ahead: continuously thinking about how something actually impacts a specific user workflow, rather than thinking about staying ahead of the curve in the abstract.

You mentioned that you are a GoLinks customer. I have to ask, what are your thoughts on the GoLinks technology, and how does your team leverage it today?

Romain: GoLinks has become an addictive tool. I used to work at Google, so I literally searched for whether this existed outside of Google. I initially used it just for myself when we founded the company, then I started creating GoLinks for literally everything and sharing them with the team. Now the vast majority of the team uses them. 

“GoLinks has become an addictive tool for us at StackOne. I’d definitely recommend GoLinks. In fact, I already have and will continue to do so.”

Romain Sestier, Co-Founder and CEO of StackOne

What do you like most about GoLinks?

Romain: What I like most about GoLinks is the ability to keep everyone focused on one thing. For example, Go/OKRs will always point to the latest OKR, so we don’t have to constantly share things or worry about versions. That idea of versioning and changing where the pointer goes sounds silly, but it’s actually very crucial.

That’s one of the best selling points for me.
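Conceptually, a go-link is just a mutable pointer from a short name to a URL. Here is a toy sketch of that idea, with hypothetical link names and URLs:

```python
# A go-link is a mutable pointer: short name -> current URL.
links: dict[str, str] = {}

def set_link(name: str, url: str) -> None:
    """Create or repoint a go-link; readers never need to know it changed."""
    links[name.lower()] = url

def resolve(name: str) -> str:
    return links[name.lower()]

set_link("okrs", "https://docs.example.com/okrs-2024-q4")   # hypothetical URL
set_link("okrs", "https://docs.example.com/okrs-2025-q1")   # repoint to latest

print(resolve("OKRs"))  # everyone typing go/okrs lands on the newest doc
```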

Last question: would you recommend GoLinks to other companies?

Romain: Oh, 100 percent. I have already, and I will again.

Conclusion: Connecting Innovation with Responsibility in AI

Romain Sestier’s insights illustrate the transformative impact of AI on the enterprise software sector. At GoLinks, we embrace similar innovation principles through GoSearch, GoLinks, and GoProfiles: solutions designed to enhance productivity, security, and workflow efficiency.

We invite you to experience these advanced tools by scheduling a demo of GoSearch, our AI enterprise search and AI agent/actions solution today. Discover firsthand how AI can drive your business toward smarter, more secure operations.
