Summary
The purpose of business intelligence systems is to allow anyone in the business to access and decode data to help them make informed decisions. Unfortunately this often turns into an exercise in frustration for everyone involved due to complex workflows and hard-to-understand dashboards. The team at Zenlytic have leaned on the promise of large language models to build an AI agent that lets you converse with your data. In this episode they share their journey through the fast-moving landscape of generative AI and unpack the difference between an AI chatbot and an AI agent.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and DoorDash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- Your host is Tobias Macey and today I'm interviewing Ryan Janssen and Paul Blankley about their experiences building AI powered agents for interacting with your data
Interview
- Introduction
- How did you get involved in data? In AI?
- Can you describe what Zenlytic is and the role that AI is playing in your platform?
- What have been the key stages in your AI journey?
- What are some of the dead ends that you ran into along the path to where you are today?
- What are some of the persistent challenges that you are facing?
- So tell us more about data agents. Firstly, what are data agents and why do you think they're important?
- How are data agents different from chatbots?
- Are data agents harder to build? How do you make them work in production?
- What other technical architectures have you had to develop to support the use of AI in Zenlytic?
- How have you approached the work of customer education as you introduce this functionality?
- What are some of the most interesting misconceptions that you have heard about what the AI can and can't do?
- How have you balanced accuracy/trustworthiness with user experience and flexibility in the conversational AI, given the potential for these models to create erroneous responses?
- What are the most interesting, innovative, or unexpected ways that you have seen your AI agent used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI agent for business intelligence?
- When is an AI agent the wrong choice?
- What do you have planned for the future of AI in the Zenlytic product?
Contact Info
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links
- Zenlytic
- Attention is all you need
- Transformers
- BERT
- The Bitter Lesson by Richard Sutton
- PID Loops
- AutoGPT
- Devin.ai
- Google Gemini
- Anthropic Claude
- OpenAI Code Interpreter
- Edward Tufte
- Looker ActionHub
- OAuth
- GitHub Copilot
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
- Starburst: ![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png) This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by the teams at Comcast and DoorDash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance, allowing you to discover, transform, govern, and secure all in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)
- Red Hat Code Comments Podcast: ![Code Comments Podcast Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/A-ygm_NM.jpg) Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).
Hello, and welcome to the Data Engineering Podcast, the show about modern data management. This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard won lessons in implementing new technologies. I listened to the recent episode, Transforming Your Database, and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics.
There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for Code Comments in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support. Your host is Tobias Macey. And today, I'm interviewing Ryan Janssen and Paul Blankley about their experiences building AI powered data agents for interacting with your data at Zenlytic. So, Ryan, can you start by introducing yourself?
[00:01:15] Unknown:
Yeah. So I am the CEO of Zenlytic and 1 of the 2 cofounders, but Paul is really the brains behind the operation. So I'll let him introduce himself better.
[00:01:24] Unknown:
Yep. Ryan's too kind. I'm Paul. I'm Ryan's cofounder and the CTO of Zenlytic. And, yeah, really excited to be chatting today.
[00:01:32] Unknown:
And going back to you, Ryan, do you remember how you first got started working in data and AI?
[00:01:37] Unknown:
So I've always been a big nerd, and I was an engineer at the start of my career. I actually became an investor for a long time. And the fund that I was looking at starting when I decided to leave was, at the time, called a big data fund. And I was doing the research for this fund, and I kind of saw how far the hardware had come since I was an engineer, and the capabilities of data analytics. And I was just so amazed by not only how far it had come, but also that it seemed to be accelerating. And I knew then that I couldn't just be an investor in this tech and I had to participate.
And that was years ago, but today, I guess we call that AI. And I've been pretty much heavily involved with the data stack ever since. So I guess it's more of a passion for the space than anything else.
[00:02:23] Unknown:
And, Paul, do you remember your introduction to data and AI?
[00:02:26] Unknown:
Oh, yeah. My first ever real job was on Roche's math team. Roche is a big pharmaceutical company, and I worked on the team that built the algorithms that run their blood glucose meters. And I remember, the thing itself looks really simple, but it is incredible how many different algorithms, and how diverse, actually need to work together to make that work. And I was like, this is incredible. Really good technology is indistinguishable from magic. This stuff is amazing, and I wanted to go and learn more about it. So I went off to grad school, and that's actually where Ryan and I met.
We were studying in grad school right when transformers were first coming out. This was in 2017, right when Google released the Attention Is All You Need paper. And we actually had a class where the professor was kinda like, hey, we were gonna talk about this other stuff, but this new NLP paper came out that kinda changes everything, so we're just gonna talk about that. It was a really cool time to be studying.
[00:03:27] Unknown:
Not the robots-in-disguise Transformers, because that would have put you in the 1980s.
[00:03:32] Unknown:
Yep. That's true. I'm not quite that old. So it had to be the
[00:03:39] Unknown:
the Google BERT and GPT-3 precursor transformers. Oh, I'm old enough to remember the original transformers and the AI winter between the 2 as well. Absolutely.
[00:03:49] Unknown:
And gotta say, by the way, thanks for having us on again. This is, I think, the only podcast that Paul and I have ever been on twice, and it's such a pleasure to come back as well, because this is 1 of our very favorite podcasts in the entire data space. So thanks again so much for hosting us. Absolutely. And thank you for coming back, and I'm glad you've enjoyed it. And to that note, I'll also mention, as you said, you've been on before, so I'll add a link in the show notes to your previous appearance so that we don't have to go all the way into the history of Zenlytic and what it is and how it works, and instead we can carry forward with what you're doing now. So for people who didn't listen to that yet, or who haven't paused and gone back to that and then come back to here, could you just give a quick overview of what Zenlytic is and then move into the role that AI is playing in your platform as of today?
[00:04:34] Unknown:
Yeah. Absolutely. Zenlytic is a business intelligence platform that uses the power of large language models to make self-service a reality. We like to say we're the world's first self-serve BI platform, which is a little bit cheeky because BI tools have been saying this for a really long time. We think this time it actually is different, because large language models provide an interface that can actually comprehend what people are saying and have the sort of back and forth you need to make that work. And we'll talk about that a lot today as we get into how we work with this data agent. But the core thing we do is we have all the normal BI features, the dashboarding, stuff like that. The real exciting thing is that at the core of it, we have a data agent that's able to effectively use the whole tool for you, and kind of be an analyst that just helps you with all the questions you have about your data and helps steer you in the right direction even if you don't know where stuff is.
[00:05:27] Unknown:
The interesting differentiator there too, I think, is that you're starting with the conversational approach being the de facto mode of interacting with the platform, whereas there are a number of business intelligence platforms that are bolting AI onto the experience, where the foundational capability is being able to interact with SQL engines, write SQL, or, in the case of Looker, write something that's sort of like SQL and then generate SQL from that, and have all these charts. And then from there, as an afterthought, or maybe not an afterthought but as a second order concern, having AI provide this conversational interface, either for just saying, what are the charts that I have, or maybe even being able to generate charts. But I'm just curious what you see as the fundamental shift in AI being the first class concern of the platform, as opposed to a bolt on after the fact, and some of the ways that that transforms the overall end user experience?
[00:06:30] Unknown:
Yeah. Great question. That is absolutely correct, that we consider this to be the principal use case of the platform. And that extends to more than just the UI, though. I think it actually requires us to rethink a lot of things about how a BI platform is built, all the way down to, you know, how the data model works, for instance. Like you said, there's a very principled discussion there around text to SQL versus sort of data model based LLM querying. There's been a lot of discussion around that, and we fall very firmly in the latter camp. But, yeah, basically, we had to build this thing from the ground up with the intent of making it work for LLMs.
That doesn't mean that it's not a fully featured platform. We have all of the sort of best in class business intelligence stuff in there. We built that out before we even built out the LLM functionality fully. But all of those things have been built to integrate seamlessly with a data agent so that they can feed into each other. And, you know, to give you an example of that, of course we have a great dashboarding experience, but those dashboards are integrated in each direction with the data agent. So in the dashboard-to-chat paradigm, you could actually take any dashboard tile, ask a question from there, and be instantly interrogating that in chat. Or, vice versa, the agent is able to search across your dashboards, actually create new dashboards, and it understands a lot about the existing data assets. So, I guess our philosophy there is that you have to have a tight coupling of all of the various ways that 1 would wanna use the tool, and they have to all agree. And that's why we think it makes sense to have this in 1 platform. If this is fragmented across multiple tools, we all know the challenges with getting those tools to agree on the single source of truth of what the data actually is.
We all know that a self serve user needs training, doesn't wanna log into multiple platforms, doesn't wanna learn multiple platforms. So there's lots of reasons why it makes sense to unify this into 1 experience. But, yes, absolutely, that whole experience was built with LLMs in mind. And I think the extension of that is that we're not afraid to look different from a conventional BI platform. Something we believe is that things are probably gonna look different, say, 2 or 3 years from now, in terms of how you interact with your data, than they do today. And we expect a lot of innovations in a lot of different software verticals because of AI. But, I think in particular, we're willing to make this tool look different to make sure that it delivers an elegant, intuitive experience for the self serve user.
[00:09:01] Unknown:
The 1 thing I'd add to that too is that a big distinction that we think about is that all the kinda incumbents, like you said, will be bolting on AI stuff. And they'll be kinda like helpful copilots. They'll help you, you know, change colors on line charts. Those are all great and helpful features. The difference is that we are really building a data agent. Something that's not just gonna be a copilot, but more like a coworker. You can delegate a task, tell it to go build a dashboard for you, tell it you need to allocate some more budget and you're not sure where to start, and have it sort of coach you through it: this is what we've got, this is what you can look at. So it's really about that coworker versus copilot, especially on the long term view of things.
[00:09:39] Unknown:
Yeah. And I definitely wanna dig more into that distinction of data agent versus chatbot. But before we go too far down that path, I'm interested in an overview of what you see as being the key stages and transitional events in your overall AI journey and some of the ways that the drastic and fast moving change in the industry has influenced your overall trajectory?
[00:10:06] Unknown:
That is a great question. And I think the drastic and fast moving changes mean that you have to be really, really dynamic when building these things. And our architecture, as a result, has changed pretty dramatically since we first built it. You know, as the capabilities of the models get higher, the things that are fundamentally possible with them just change. And you need to change the architecture and the tools you build around the models as a result. So we've already had, I think, 3 distinct times where we've had to basically rebuild it, kinda.
Yeah. Definitely very fast moving. And it's really important to stay up to date too, because it moves so fast. If you're not staying up to date with it, you're gonna fall behind quickly. It's been a really interesting journey for us, because when we started building this, it was before,
[00:10:50] Unknown:
you know, GPT 3.5, or whatever sort of counts as a modern large language model, existed. At the time we were using the early, early versions of that. So, like, open source models like Google's BERT, for instance. This is the land before time, but they had some basic language understanding capabilities. And I'd say the general progression for us is that with those early, early models, we had to put a lot of guardrails in. We had to really limit what the models were capable of doing, basically. And then over time, as the models became more performant, we could take some of those guardrails off, and that allowed us to be more flexible, more powerful. We were constantly rethinking the architecture as we were doing that to make sure that it worked really, really well. But it's a little bit like, there's a famous paper called The Bitter Lesson by Richard Sutton that talks about how it's better to let machine learning models do the flexible stuff, basically, instead of trying to hard code things around them. That can give you a short term burst, but, long run, the performance of the system will be dictated by the growth of the model capabilities, I guess.
And then it's kind of interesting, because along that progression, you also have to ask yourself what's not going to change, what's going to stay the same. And 1 good example of that is what I mentioned earlier, which is using text to SQL versus using some sort of data model powered paradigm for the LLMs. And we have a strong belief that that is not gonna change for the foreseeable future. I don't think text to SQL is going to get bitter lessoned. I think that you can debate about the accuracy, for instance, and maybe we could see models getting good enough in the future that very, very accurate text to SQL is possible.
But even if that's the case, there are still other issues around it. I mean, 1st and foremost, data governance and security is really, really difficult with those sorts of models. Secondly, it's really difficult to give those models enough context to really answer the question. So let's assume we can make a text to SQL algorithm that's superhuman. Let's say it's as good of a SQL writer as Paul is. If you were to take Paul and drop him into a random data warehouse and say, tell us our conversion rate, without any context, he'd probably get it wrong too. Right? So you can eventually start building in context layers and things like that, but that eventually starts to look a lot like some sort of data model anyways. So it's been interesting: we have to work in a fast moving environment, but we're also paying attention to the things that won't change.
[00:13:12] Unknown:
Absolutely. That's definitely 1 of the lessons that keeps getting reinforced throughout my career and in conversations with a lot of other people: if you focus on the fundamentals, that will carry you much farther than if you try to chase after the new shiny thing, because you're never going to understand how it works, why it works the way it does, or how to apply it properly if you don't have that foundational knowledge of the core computer science principles, systems design, and systems architecture.
[00:13:42] Unknown:
Yeah. Totally.
[00:13:43] Unknown:
And in your mad dash journey through the land of AI, I'm curious, what are some of the dead ends that you've run into, and some of the challenges that still persist now as you have gone through these generational shifts in language model capabilities?
[00:13:59] Unknown:
So, I mean, there are many, so I'll just give a little sampling. 1 that I think is a common 1 that we sort of had to really learn from is whenever you try to nest large language models. So we have an agent, which basically means there's 1 at the top that makes decisions and uses these different tools. Whenever you have it use a tool that also has a large language model in it, it pretty much never works. It always seems like a good idea, but really, there are just fundamental differences in context of what the kind of sub agent knows, and it just kinda doesn't work as a result. It just doesn't feel good as a human using it. So that's definitely 1 of them. And there are many. The other thing that I think is talked about a lot more is that a lot of these systems have a fundamental search problem.
I think we do a pretty good job at that now, but that's 1 of the areas that seems like it's just straightforward. You're gonna ask an AI, and it knows stuff. And it's like, well, how does it know stuff? It has to search some very large corpus of information to be able to know the right context at the right time. And that itself is also a hard problem that you have to solve when you're building these systems. So that's 1 that, if someone is thinking about building a system like that, think long and hard about how you're actually gonna do the search piece of it, because that part is really important too.
[00:15:22] Unknown:
Yeah. And more generally, think long and hard about how you will help the model deal with ambiguity. I'd say 1 of the big jumps from demo land to the real world is that literally a 100% of the people that are using this sort of technology in the real world have lots of ambiguous, slightly overlapping definitions in their data. And sometimes it's hard for a human to navigate that as well, but it's something that is very difficult for an LLM to do. The other interesting 1, which comes up over and over again in many domains, is that LLMs have not yet gotten great at long term planning.
So there's a fundamental problem with LLMs: they just kind of add the next token based on the previous tokens. And as those keep going forward, tiny errors in those little token choices can compound over time to lead to a very wide range of results. If you're writing something with lots of tokens, and looking far ahead into the future means writing lots of tokens, you can get a lot of that drift happening, and that can lead to undesirable outputs, basically. So it's hard to make them look really, really far ahead.
That's actually 1 of the ways where data agents can come in and help self correct that.
[00:16:33] Unknown:
Digging into that data agent concept, I'm curious if you can provide a little bit more color as to what that term signifies, both from a semantic perspective, but also from the technological requirements to be able to build and support that type of functionality, particularly in comparison with the chatbot interfaces that people have grown to be very accustomed to in the past couple years?
[00:17:00] Unknown:
Yeah. For sure. The big difference, the way I see it, is that a chatbot is tokens in, tokens out. It's kind of an open loop system where you specify an input and it gives you an output. That's what leads to those long term planning issues, for instance. What an agent does differently is that it's actually usually an architecture of a number of LLM calls that are chained together in a closed loop way, is how I'd fundamentally describe it. It's a little bit like, so, like I said, before I was an investor, I was an engineer, and I have a double E by education, electrical engineering. And when I was studying double E, everyone was crazy about these things called PID loops. I don't know if y'all have heard of PID loops before. PID loops are the things that keep airplane autopilots stable, or they're the things that make thermostats work. It's just 1 little closed loop feedback.
And it's a very simple thing. When I was going to school in, like, the early 2000s, this was kind of a new thing, but they're very, very powerful. They unlock a lot of what was, like, almost like AI at the time. So what does that mean for data agents and for LLMs in general? What it means is that the LLM will actually build a plan at the start, and it will actually execute those steps 1 by 1. And every step along the way, it will pause and reflect and say, okay, wait. Did I achieve this step well? Do I need to change the plan at all? Do I need to go back and revisit earlier steps? And it'll sequentially go all the way through that. And once it's assessed that it's actually completed the task, it will again look at the whole chain of tasks and say, alright, did I achieve what I set out to do in the first place? And if not, then it can go back and iterate, for instance. That basic architecture allows for lots of other really interesting things. It allows for tool usage, for instance. So generally, data agents are given sort of a toolkit of things that they can do in their universe that provides them feedback.
And as they progress through these steps, they choose the most appropriate tool for the job, and that tool could be anything. So, in the context of Zoe, our analyst, writing a semantic query could be a tool, or clarifying an ambiguous question could be a tool. In this paradigm, asking someone for help is actually a tool for the agent. And the agent will actually go through, build a plan, use those tools, and iterate on that feedback in a closed loop way, which keeps it on track and consistent all the way through. Is there anything else you would add there, Paul?
[00:19:22] Unknown:
Yeah. I think the sort of TLDR is that agents plan ahead, chatbots react. Agents use tools; chatbots are, just like Ryan said, tokens in, tokens out. They just write text. And agents can iterate, which is really important, especially in data. Because if you've ever answered a lot of these questions, you'll make an assumption about how something works, run a query, get that thing back, and be like, okay, well, I guess I didn't understand how that thing works. Make a change, try again, and then you sort of iterate yourself to finding the actual solution. Agents are able to do that as well, where a chatbot would just create the query and be like, here you go.
So the agent harness gives you a lot of abilities for it to feel more like talking to a person because it's able to correct its mistakes just like a person is.
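In code, the closed-loop pattern Ryan and Paul describe might look something like this minimal sketch; the `llm` helper, the prompts, and the tool names here are illustrative assumptions, not Zenlytic's actual implementation:

```python
# A minimal sketch of a closed-loop (plan / act / reflect) agent.
# Names, prompts, and tools are illustrative, not Zenlytic's code.

def llm(prompt: str) -> str:
    """Stand-in for any chat-completion call (OpenAI, Claude, etc.)."""
    raise NotImplementedError("wire this to a real model client")

TOOLS = {
    "run_semantic_query": lambda arg: f"<rows for: {arg}>",      # hypothetical
    "ask_user_to_clarify": lambda arg: f"<user's answer to: {arg}>",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}",
               "PLAN: " + llm(f"Write numbered steps to achieve: {goal}")]
    for _ in range(max_steps):
        # Closed loop: choose an action, observe the result, then reflect.
        action = llm("History so far:\n" + "\n".join(history) +
                     "\nReply 'tool_name: input' for the next step, or 'DONE'.")
        if action.strip().upper().startswith("DONE"):
            break
        name, _, arg = action.partition(":")
        history.append(f"ACTION {action} -> {TOOLS[name.strip()](arg.strip())}")
        # Reflection step: did that work? Should the plan change?
        history.append("REFLECT: " + llm(
            "Did that step succeed? Revise the plan if needed:\n"
            + "\n".join(history)))
    return llm("Did we achieve the goal? Summarize:\n" + "\n".join(history))
```

The key structural difference from a chatbot is that single outer loop: every tool result re-enters the context before the next decision, which is what lets the agent notice and correct its own mistakes.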
[00:20:07] Unknown:
And the neat thing is that agents are kind of the next big thing, I think, the next big step in LLM research. And the history of it is actually, right when some of the really great models first started coming out last year, people experimented and were really excited about agents. You might have heard of AutoGPT or BabyAGI, and these were some of the very first agents. And, you know, they couldn't make them go all the way, basically. They kind of fell out of hype, and people moved on and started experimenting with other things. There was kind of a mini agent winter through most of last year. We kinda stuck at it. I think we might have an easier problem than a generalized agent, because we have a narrow domain problem. But we just kept using an architecture like that and plugging away, and it works well for us. But then, interestingly, to our surprise, these agentic architectures started becoming more popular and more performant again, even over the last few months. A good example of that would be Devin.ai as a software writing agent, for instance.
And I think now we've reached a point where people have proven that these agents actually outperform basic chatbots. So, like, GPT 3.5 with an agent architecture can generally outperform GPT 4 with vanilla tokens in, tokens out on most benchmarks. Andrew Ng says that you could see GPT 5 level quality performance today; it's just GPT 4 with an agent. He's kind of extending that analogy. So I think the world has really woken up to the fact that this is the right way to work with these tools, and I suspect that we'll be seeing a lot more deployment of that going forward. I know if I was starting some sort of AI powered software tool right now, whatever it would be, I absolutely would be using an agentic architecture for it.
[00:21:53] Unknown:
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end to end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for. Starburst has complete support for all table formats, including Apache Iceberg, Hive, and Delta Lake. And Starburst is trusted by teams of all sizes, including Comcast and DoorDash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst today and get $500 in credits to try Starburst Galaxy, the easiest and fastest way to get started using Trino.
So, given the requirements of being able to have the AI model have this feedback loop, understand the contextual elements of the conversation that it is engaged in, and understand the goals of the task that it has been assigned, I'm wondering if you can talk to some of the supporting infrastructure that's necessary to allow these very flighty, if you will, models to have the appropriate context and the appropriate semantic reasoning to be able to carry on these longer interchanges and actually achieve the stated goal?
[00:23:11] Unknown:
Yeah. That's a great question. There are a lot of different components to that architecture as well. There's the search component, to be able to give it the right context at the right time. There's curating the tools very well, so those tools are atomic and it's clear to it what they're gonna do every time. And then 1 of the really important aspects is the feedback loop itself: how do you give it feedback when it's made a mistake, in a way that's really easy for it to understand, and keep it on mission. That's a harness that we've had to develop internally.
And that's where we've spent a lot of time to really be able to increase performance in the agent: having that feedback step and letting it self correct.
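That feedback step can be as simple as catching a tool failure and handing it back to the model as short, concrete feedback instead of ending the run; a sketch, with hypothetical function names:

```python
# Sketch of the self-correction harness: tool failures become plain-language
# feedback the model can act on. Function names are illustrative assumptions.

def execute_with_feedback(propose_call, run_tool, max_retries: int = 3):
    """propose_call(feedback) asks the model for a (possibly repaired) tool call;
    run_tool(call) executes it, e.g. compiling and running a semantic query."""
    feedback = None
    for attempt in range(1, max_retries + 1):
        call = propose_call(feedback)
        try:
            return run_tool(call)
        except Exception as err:
            # Keep the error message short and specific so the model can fix it.
            feedback = f"Attempt {attempt} failed: {err}. Fix the call and retry."
    raise RuntimeError(f"Could not recover after {max_retries} attempts: {feedback}")
```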
[00:23:58] Unknown:
1 thing that's become increasingly clear over the past year: a year ago, I think everyone was asking who can build the best model. And there was a wide range of quality of models as well. You had OpenAI leading the pack, and then there were a number of various levels of quality beneath that. Fast forward to today, and we have a number of different models that are all kind of comparable in terms of their capabilities. If you take an off the shelf model from OpenAI or Gemini or Claude, they all have really high quality models that perform more or less the same. And I think people are realizing that the current architecture is kind of reaching the top of the s curve.
The compute itself is somewhat commoditized, and people are starting to realize that the really standout performance from LLMs is gonna come from 2 things, really. First, from the right sort of harnesses that can use and interact with these models. And secondly, the right sort of interfaces to make it elegant and easy to use. So it's really the application layer where the innovation is happening now.
[00:25:03] Unknown:
Yeah. And 1 thing I'd add to that too is that you kinda know if you have a good architecture if, every time OpenAI or Anthropic releases something, you're thinking, oh yeah, if they make it better, this is gonna be a huge boost. If you're worried about their model improvements killing your business, then you probably don't have the right structure around it.
[00:25:25] Unknown:
The other element of AI and data agents that you mentioned earlier is the concept of the appropriate data model and domain model, so that the agent has the appropriate search space for being able to understand the nature of the request, and, in the business intelligence space in particular, understand the semantics of the data that it has been tasked with querying. 1 of the primary interfaces for providing context to models has been the outgrowth of retrieval augmented generation, where you have to generate these embeddings to be able to put them into a vector search space. You don't necessarily want to have to embed all of your data, because that can get very expensive. And I'm wondering if you can just talk to some of the ways that you're thinking about the types of modeling and how to provide that context, as well as how to reduce the burden on end users for feeding the appropriate context to the agent to get the appropriate responses back.
[00:26:31] Unknown:
Yeah. That's a great question. The way we handle it now is that the semantics are defined along with the actual definitions of the metrics. So you can add context on how to use a table or a view. You can add context on when to use this revenue metric instead of that 1. And all of that gets shown to the model. And then the model is able to make pretty sophisticated decisions on when to choose 1 versus the other, because it has that context that's internal to your business. So when you ask it a question, it actually knows the business context around what the metrics mean, what the tables mean, and how to use them. And then actually serving that context to the model involves not just the vectors, but also sort of traditional keyword search as well. A hybrid model, we found, works best.
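A hybrid retriever of the kind Paul describes might blend embedding similarity with keyword overlap along these lines; the scoring functions and the 0.7/0.3 weights are assumptions for illustration, not Zenlytic's implementation:

```python
# Illustrative hybrid search: blend vector similarity with keyword overlap.
# The weights and scoring functions are assumptions for the sketch.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_search(query, query_vec, docs, top_k=5, w_vec=0.7, w_kw=0.3):
    """docs: list of (text, embedding) pairs for metrics, views, descriptions."""
    scored = sorted(
        ((w_vec * cosine(query_vec, vec) + w_kw * keyword_score(query, text), text)
         for text, vec in docs),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]
```

The keyword leg catches exact field and metric names ("mrr", "net_revenue") that embeddings can blur together, while the vector leg catches paraphrases; hence the hybrid.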
[00:27:22] Unknown:
All of this functionality is wrapped up together in Zenlytic's cognitive layer. And what the cognitive layer is is an evolution of the semantic layer, which was, I guess, recently popularized by Looker, but it's been around for 30 years or something like that. What the cognitive layer does is it takes all of that semantic layer functionality, where you can encode metrics in a sort of YAML and SQL kind of way. It takes all of that functionality, but then it also adds in all of this necessary agentic LLM harness, all these things we've been talking about: adding in those search capabilities, adding in special ways to have feedback loops, both hidden feedback loops as well as public ones with the end user, to make a data agent paradigm really, really effective.
And those things all get combined together in a really elegant way inside the cognitive layer. And that's also where the context is captured. It understands the nature of the various metrics you're using and what sort of dimensions you slice them by, and all the business context is encoded inside that layer as well.
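As a concrete illustration of that "YAML and SQL" encoding, a metric definition carrying the business context an agent would be shown might look something like the following; the field names are hypothetical and follow no particular product's schema:

```yaml
# Hypothetical metric definition for a semantic/cognitive layer.
- name: net_revenue
  type: sum
  sql: ${orders.amount} - ${orders.refund_amount}
  description: >
    Revenue after refunds. Prefer this over gross_revenue when the user
    asks about "revenue" without qualification; finance reports use net.
  dimensions: [order_date, region, product_line]
```

The `description` is the part that does double duty: it documents the metric for humans and is surfaced to the model so it can choose between overlapping definitions.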
[00:28:22] Unknown:
1 of the other interesting aspects of the current state of AI at this point in time, both in industry and in society at large, is that there's a high degree of variability in terms of the levels of skepticism among some people, or enthusiasm and misplaced attributions of skill on the other end of the spectrum. And I'm curious what you have seen as some of the aspects of customer education that have been necessary to help them understand the realities of what AI can and can't do, and some of the skepticism that maybe they do need to bring to bear in those interactions.
[00:29:00] Unknown:
Yeah. Definitely. I think there are plenty of examples in both directions. A good way to start is that there are some people who have just unbelievably high expectations for what it's able to do. They'll come in and they'll say, like, hey, show me my revenue last week and compare that to the average of all my competitors. And you're like, we can't go and get your competitors' revenue information. Nobody could really do that. But some people come in with, oh, it's gonna magically find my competitors' revenue for me. So you definitely have to caveat: this is just the data that you have. If you don't have this data, it's gonna tell you, hey, we don't have this data.
And then there's also the product side of it. How can we do a better job, not just us, whenever we onboard people, of tempering their expectations, but also of making the product interpretable? The first thing is, just like a human, it's not gonna be perfect. There are gonna be times where you ask for the last 3 months, and it understands that as the rolling last 3 months instead of the last 3 full, complete months. There are a lot of situations where it might not do exactly what you think it's gonna do. And that's why the cognitive layer gives us the ability to show you graphically, in the UI: this is the date range it chose. And since those are all inputs, and then the SQL gets compiled, it can never lie about that. So you always have this ability to audit it without having to look at SQL. Which means, you ask it to go pull revenue, you'll see it pulled net instead of gross, and you can swap those if you wanna swap them. Because it will clarify, make some assumptions.
And having that interpretability piece is really important in terms of being able to close the gap between what you asked for and what it understood.
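That "it can never lie about that" property falls out of having the model emit structured query inputs rather than raw SQL, with the SQL compiled deterministically from them; a minimal sketch, assuming a single flat table and hypothetical names:

```python
# Sketch: the model emits structured inputs; SQL is compiled from them
# deterministically, so the metric and date range the UI shows the user
# are, by construction, the ones that actually ran. Names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SemanticQuery:
    metrics: List[str]                          # e.g. ["net_revenue"]
    group_by: List[str] = field(default_factory=list)
    start: str = "2024-01-01"                   # resolved from "last 3 months"
    end: str = "2024-03-31"

def compile_sql(q: SemanticQuery, table: str = "analytics.orders") -> str:
    select = q.group_by + [f"SUM({m}) AS {m}" for m in q.metrics]
    sql = (f"SELECT {', '.join(select)} FROM {table} "
           f"WHERE order_date BETWEEN '{q.start}' AND '{q.end}'")
    if q.group_by:
        sql += " GROUP BY " + ", ".join(q.group_by)
    return sql

q = SemanticQuery(metrics=["net_revenue"], group_by=["region"])
# The UI renders q.metrics / q.start / q.end for the user to audit, then runs:
print(compile_sql(q))
```

Because the user audits the structured fields (metric chosen, date range) rather than the generated SQL text, a non-SQL-reader can still catch "net instead of gross" and swap it.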
[00:30:49] Unknown:
Yeah. Paul's keyword there is audit, which I think is really important, because I think a lot of people, even people building LLM applications, have not fully internalized this yet: the LLM needs to give you feedback at a level that you can understand. And I think that's where a lot of these divergences of expectations come from. People think the LLM will do everything for them. I like to think of it more as a supervisory relationship. You're the supervisor of the LLM; you ask it for things, it gives you an output you can review, and you just keep iterating from there. A great example: if you take OpenAI's code interpreter, that's a notebook. It's generating Python. And people will go on there and ask questions, and it'll write pretty good Python for them, but that Python still makes mistakes. And if you're not a great Python developer yourself, it's really hard for you to catch where it goes wrong, in sometimes very dangerous ways.
So a big part of the way that we approach this is that that level of feedback for the self serve user is in a language they can understand. It's speaking the metrics and slices and filters in a very understandable way, the same way that they're used to in a BI platform. And then the other side, I'd say, is true. We get the skeptics and we get the overenthusiastic people, but it's actually funny: I think it's easy to deal with the skeptics, because we get them on a product demo, and I think the product demo speaks for itself, frankly. I think that people see the capabilities, and they realize that it's not all smoke and mirrors at that point.
[00:32:15] Unknown:
In your space of business intelligence where the information that you're giving back to people is likely to lead to some sort of decision, whether that's in terms of road map or a product purchase or the ways that you're thinking about allocation of revenue or income and also the fact that AIs can often get things wrong. You pointed out that it only has your data to work from, so there's a certain amount of guardrails built in automatically. But as everyone who has been in this industry long enough knows, it's very easy to lie with data. And so I'm curious how you've approached some of that guidance in decision making as well as providing appropriate caveats to the end user to say, this is what we've come back with. This is how we think this is interpretable, but maybe don't go ahead and write out that check quite yet.
[00:33:05] Unknown:
Yeah. A big piece of that is the part right when it's about to answer your question, where, just like a really good human does, it tells you what it understood you to be saying and what it's gonna do as a result. That part right there solves a lot of the problems, because a lot of the time a good human will do that, and then the person will stop them right there and say, oh, wait a minute. Nope. I meant this thing instead of that thing. Just that explaining part really helps make sure things are on track and you're in the same kind of head space or universe as the person asking the question. Then the other 1 really is leaning on that interpretability piece. Being able to just hover over and see, okay, it picked this metric from here. Because 1 of the advantages we have is, since we have the full BI platform available, people see stuff on dashboards. They're like, oh, yeah, I know. I see that metric on all my main dashboards.
That's the 1 I was looking for. And they're able to kinda understand how what they ask for relates to what they see on a day to day basis on the dashboards.
[00:34:04] Unknown:
Yeah. An extension of that is, the chatbot will even suggest those dashboards if there are prebuilt data assets available. You start asking questions, and it'll be like, you know, I can answer this, but do you want to look at the dashboard that your team has already built? So it actually promotes the human activity over the chatbot answer first. And I think, generally speaking, this is 1 of those things where that application layer comes into play, and a lot of the solutions to these problems are not gonna be building better models. They're gonna be building better interfaces.
[00:34:32] Unknown:
For that UI element, another challenge in the business intelligence space is understanding what is an appropriate visualization to use for a particular set of data and the axes along which it is being represented. And I'm curious what you see as maybe some of the potential of multimodality in the AI models to address some of that challenge of, what visualization do I want to present, and what is that actually going to convey to the person, or how confusing is this going to be, because there are 10,000 colors.
[00:35:04] Unknown:
Oh, totally. The system we have right now, which I think works generally pretty well, has sort of a deterministic first step: hey, these dimensions, with this many categories, this is the best way to visualize it. That's always the first step. And then the person can ask for whatever they want. If they wanna make the entire thing just shades of green, they can do that, and it will do it for them. So it definitely can go off the rails if you ask for a visualization that doesn't make sense; it will get it for you. But we have a pretty good setup: it will default pick a visualization that makes sense for you. And then if you want to go from there, you can go wherever you want to go from there.
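That deterministic first step might amount to a small rule table keyed on the shape of the result set, along these lines; the thresholds are illustrative guesses, not Zenlytic's actual cutoffs:

```python
# Illustrative default-chart rules keyed on the shape of the result set.
# Thresholds are guesses for the sketch, not Zenlytic's actual cutoffs.

def default_chart(n_metrics: int, n_dims: int, n_categories: int,
                  has_time_dim: bool) -> str:
    if n_metrics == 1 and n_dims == 0:
        return "single value"     # one number, no breakdown
    if has_time_dim:
        return "line"             # trends over time
    if n_dims == 1 and n_categories <= 20:
        return "bar"              # one categorical breakdown
    if n_dims == 2 and n_categories <= 10:
        return "grouped bar"
    return "table"                # too many slices to plot cleanly

# e.g. revenue by month -> "line"; revenue by region (6 regions) -> "bar";
# the user can then override ("make it all shades of green") from there.
```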
[00:35:50] Unknown:
Yeah. And the other philosophy here takes a page from Tufte's book. In a word, I'd say we just try to be boring. It doesn't lead to the craziest visualizations or anything like that. You're going to see a lot of bar plots, a lot of line plots, a lot of basic tables. It kills me inside, you know, I love sexy data viz, but at the end of the day, that conveys the information they're looking for most effectively.
[00:36:16] Unknown:
And in your experience of building Zenlytic and continually investing in the AI capabilities and this data agent functionality, what are some of the most interesting or innovative or unexpected ways that you've seen that feature set used?
[00:36:31] Unknown:
That's 1 of the coolest things about building stuff with AI: you see it used in ways that you really, really did not expect. Like, we had 1 customer use it, and they were like, okay, show me the top 10 accounts by, you know, most amount that they owe us in the last 3 months, or something like that. Wanna see all the uninvoiced accounts. And we're like, okay, that makes sense. And then they were like, draft emails using the invoice details of all of these accounts asking for them to pay the money, and don't be too aggressive, but be firm. And then it just went through and got all the invoice details for everything and drafted the emails for them. And I was like, that's pretty crazy.
I did not expect that to be how people would use it. So we see a lot around the unstructured text data.
[00:37:18] Unknown:
Yeah. The other fun thing too is what I would call the journey into semi structured data. And we see people actually kind of working around this right now, which is interesting, where they will take textual data. They'll take news articles. They'll take white papers or whatever, and they'll store them inside of a data warehouse so they can use the chatbot to not only access that, but also manipulate the text in 1 step. And I think it's cool that we're actually starting to see the lines blurring a little bit. That's a good example of semi structured. You can go all the way to fully unstructured. I think 1 of the weird things about working in our industry is that the nature of data has sort of changed.
2 years ago, data would have been a table, you know, or would have been a spreadsheet or something like that. And now data can be a PDF or a set of meeting notes or a YouTube video. Those are all data now. So we're really excited about what the future is gonna bring, and we're fully prepared to embrace that future of the broader definition of data. Let me put it that way.
[00:38:22] Unknown:
Along the lines of your example, Paul, of being able to draft those emails, it also brought to mind the earlier note that what differentiates a data agent from a chatbot is the fact that it has access to tools and understands how to use them. I'm wondering what you see as some of the future evolution of the tools that are in the toolbox for that agent, and some of the ways that you're thinking about the interfaces that you might expose so that other people can plug in new tools that you didn't think of yet?
[00:38:52] Unknown:
Yeah. That is a great question. I think this is something that all BI tools have tried to do at some point and never really successfully. Like, I don't know if you remember Looker's ActionHub. Good idea. It's just hard to make that actually work in practice. I think it's finally possible to make those things work with large language models. I don't have the perfect idea of exactly how that's gonna look. But I can definitely see a world where you're like, hey, how should I be adjusting spend for this campaign? And it's like, well, this is how it's done. This is how other campaigns have done. I'd recommend adjusting it up a little bit because it's doing well. You'd say, great, go do that for me. A little screen pops up. You're like, yep, log in to Facebook. There you go. It took the action for me. I could definitely see a world where that's the case. And it's limited not by its integrations with things so much as by what OAuth access you have for your internal tools.
And that's possible. I mean, what I described is really complicated to create, right? But it's actually possible now for the first time with large language models. And I think as they get better and more consistent, and people, you know, use ChatGPT, ask it to browse the web for them, and just kinda trust that it did a reasonably good job of summarizing that web page, as trust builds in the underlying models, then people will be more comfortable saying, yeah, go and adjust the spend for this campaign for me.
[00:40:19] Unknown:
A great example of that, actually: this problem is challenging, but versions of it exist today. Right? If you look at how OpenAI characterizes GPTs, the GPT store, and their earlier API for it, that API was expressed in natural language. So you wouldn't actually go and say, hey, make a token request to HubSpot to pull XYZ, in JSON. You'd just say, get our sales from HubSpot. And that would actually use the necessary API docs in HubSpot to go and connect and make that request, even though it was characterized in natural language.
So, yeah, I think 1 of the big bottlenecks to those application stores really blossoming is that those connections are hard to build and maintain. So if we can make that more elegant, it opens up a whole new world of possibilities.
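The natural-language style Ryan describes is close to how function-calling tool specs work today: the model sees a plain-English description of an action and chooses arguments, while the harness, not the model, holds the credentials and makes the request. A hedged example of such a spec, with a hypothetical tool name:

```python
# Hedged example of a function-calling-style tool spec. The description is
# natural language; OAuth and the HTTP call live in the harness, not the model.
get_sales_tool = {
    "name": "get_sales_from_crm",          # hypothetical tool name
    "description": "Get our sales totals from the CRM for a given period.",
    "parameters": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string", "description": "ISO date"},
            "end_date": {"type": "string", "description": "ISO date"},
        },
        "required": ["start_date", "end_date"],
    },
}

def handle_tool_call(name: str, args: dict, oauth_token: str):
    """The model only chooses name + args; credentials stay in the harness."""
    if name == "get_sales_from_crm":
        # e.g. an authenticated GET against the CRM's reporting endpoint,
        # using args as query parameters and oauth_token in the header.
        ...
```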
[00:41:06] Unknown:
And so in your experience of building Zenlytic, working very closely in this AI ecosystem, what are some of the most interesting or unexpected or challenging lessons that you've each learned in the process?
[00:41:19] Unknown:
I think 1 of the big ones is just how many mistakes you have to make to figure it out. All this stuff is just bleeding edge. You're kind of off the map for, you know, how do you evaluate these systems? Because they do different things every time, basically. How do you make them consistent? Again, they do different things. They're nondeterministic. There's a lot of stuff that we've had to build and figure out internally. And part of that too is that 1 of the big differences is that before, if you showed a demo of something, it meant you could pretty much do it at scale. You just pay Amazon more money. But that kind of turns on its head with AI. Everyone can kinda show the same demo, but the real question is, can you actually do it for a real data warehouse, or whatever your actual problem domain is? Being able to go from showing a good demo to actually working well on real data warehouses for production use cases is a really big jump. And we've had to do a lot of engineering to get there. So it's definitely surprising, the difference between how easy it is to do a demo versus how hard it is to make this work in production.
[00:42:31] Unknown:
Yeah. The tech is deceptively hard. It's funny, Paul talks about, hey, we're off the map here. And I remember my journey was, at the start of all these amazingly fast developments happening in AI, first you're excited, and you're like, this is great. We're off the map. This is all brand new technology. And then you very quickly realize, this is terrible. We're off the map. There's really no known paradigm for working with this tech. And AI is especially dangerous for that, because it makes it look like it could be easy.
And I think there's a really common misconception with a lot of people that you just kinda throw an API call in somewhere, and you're calling an LLM, and suddenly that just sprinkles little star emojis over the whole product, basically. But this is a fundamentally new way of interacting with computers, and there's no rule book for it. There's no playbook. And, yeah, you really have to build everything from the ground up. You have to build tooling from the ground up. You have to build methodologies from the ground up. And I think people don't realize how difficult of a problem space it is. And what I mean by that is, when you add something like textual inputs to a product, you add in an unconstrained input space.
And in our case, and in many cases, there's also generally an unconstrained output space. And in our case, there's a huge range of different types of data in the middle of those 2, from input to output. And when you're building in a space like that, it's actually very hard to build a product that can capture all of that in 1 spot. So, I would say that it has been a pretty awesome learning experience, and I feel like we've had to be very, very creative along the way to implement a lot of the stuff that we've done. But it's been a lot of fun too. Yeah. Really been a rewarding experience
[00:44:27] Unknown:
too. And not to mention all of the time you've had to spend hunting down GPUs so that you can actually run the inference.
[00:44:34] Unknown:
Well, it might be LPUs soon. We'll see how well Groq does.
[00:44:40] Unknown:
Yeah. Another point of curiosity, I guess, is what you've seen in terms of the scalability of cost for actually running this inference, particularly given that there are going to be highly bursty workloads given the nature of your end-user experience, where maybe somebody is having an hour-long conversation with their chatbot and somebody else just has a single one-off back-and-forth. And the way that you think about the overall cost of executing a model for inference, and how to reduce the overall load on the system by offloading that work as quickly as possible to some of the lower-resource models?
[00:45:21] Unknown:
Yeah, that's a really good question. From our perspective, it definitely matters, but it matters less, because we are a B2B product, which means we also charge a lot more money. We're not making, like, 15 cents on each user. What that means is that we have more budget to make the tool work really, really well. So we pretty much universally push for the highest-comprehension, biggest and best model in as many use cases as we can, because it's got to work flawlessly, and people are willing to pay for it if they genuinely have a really good experience with it. So we definitely index more on maximizing experience.
Not at all costs, but
[00:46:06] Unknown:
but even if costs are high. It's better to have high costs and a great experience than lower costs and a meh experience. Highest-quality model and just large numbers of tokens. I mean, the agentic approach is to take more tokens. In our case, for example, before Zoe, the chatbot, ever replies to your question, she's already had extensive back-and-forth conversations with the cognitive layer under the hood, and then she goes to have the conversation with you. So we definitely take the gold-plated approach, but I'm not that worried about it. Not only because we're a SaaS product, but also because those costs have been decreasing dramatically. We just saw GPT-4o get released, and that slashed the cost of high-performance models in half. We see an order-of-magnitude change every 12 months or something, and it just keeps getting cheaper and cheaper. I think it's going to become essentially free before too long.
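As a rough illustration of that hidden back-and-forth, here is a sketch of the general agent pattern with entirely hypothetical names (this is not Zenlytic's code): the agent keeps exchanging tool calls with a cognitive/semantic layer until it has the data it needs, and only then composes the user-facing reply.

```python
# Illustrative sketch of the "talk to the cognitive layer first" pattern.
# All names are hypothetical assumptions: the agent loops on tool calls to
# the semantic layer, then answers only once it has the data it needs.

def chat_completion(messages: list[dict]) -> dict:
    """Hypothetical LLM call returning either a tool request or a reply."""
    raise NotImplementedError

def query_semantic_layer(metric: str, dimensions: list[str]) -> list[dict]:
    """Hypothetical: resolves a governed metric query and returns rows."""
    raise NotImplementedError

def answer(user_question: str) -> str:
    messages = [{"role": "user", "content": user_question}]
    for _ in range(10):  # cap the hidden back-and-forth
        step = chat_completion(messages)
        if step.get("tool") == "semantic_layer":
            rows = query_semantic_layer(step["metric"], step["dimensions"])
            # Feed results back so the next turn can refine or answer.
            messages.append({"role": "tool", "content": str(rows)})
        else:
            return step["content"]  # final, user-facing reply
    return "Sorry, I couldn't resolve that query."
```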
It reminds me a lot of the start of the mobile era; I always compare the AI era to the mobile era because that's the last big wave I can think of. At the start of the mobile era, there was this optimized kind of internet, you've probably forgotten it, with low-resolution mobile images, and now we don't even think about that. I think we'll probably see the same transition happen with LLMs as well.
[00:47:31] Unknown:
Yep. I very much remember the mobile web, and reading books about how to write your HTML so that it would render on a mobile phone. Then HTML5 was the new amazing thing, and now it's just background noise.
[00:47:42] Unknown:
Yeah. Totally. 100%.
[00:47:46] Unknown:
And so for people who are building systems and products, what are the cases where an AI agent is the wrong choice and it's just too much time and energy invested for not enough return?
[00:47:58] Unknown:
I think it's not just that the return might not be high; it might just not make sense in your application. A good example is a tool I use every single day, GitHub Copilot. It's amazing. You type a few lines, and it offers to complete them for you. It's great. It's not an agent, and nor should it be. It just gets some general context about where you are and what code you've been looking at, sees the rest of your file, and offers you a very reasonable completion. That's a great product that should not be an AI agent, and there are plenty of other products out there like that. So I would say it's really about what kind of product you're building and building a good experience for the user.
[00:48:37] Unknown:
And as you continue to build and evolve your product and your platform and keep pace with the AI ecosystem, what are some of the things you have planned for the near to medium term or any particular projects or problem areas you're excited to explore?
[00:48:52] Unknown:
I think the biggest one for us is, as we talked about earlier, we've seen a lot of people using unstructured data, whether they're sticking news articles in Snowflake or reading invoice details and having it draft emails. We've seen a lot of people ask about more textual data. So a big thing that we're thinking about, and will be improving a lot, is the product experience of working on text data. Because, like Ryan said, it changed the definition of data. More people are expecting it to be able to not just sum up some values in a transactions table, but to actually understand documents and what's going on in their business. Plus one to that. And I would say, maybe this is more of a very near-term future plan for us because it's in the product today, but one thing that I'm personally very excited about is
[00:49:42] Unknown:
adding advanced analytics capabilities to Zenlytic. Those are new tools for the data agent, for the chatbot. You see kind of two halves of this market right now. You see the BI platforms bolting on these agents, or chatbots, for retrieving data. And you see a lot of chatbot-only tools that are focused on writing notebooks; Code Interpreter is one of them, but there are all these tools for writing analytics when you upload a CSV. It feels strange to me that those are two separate things. That should be an end-to-end workflow.
And now in Zenlytic, you can actually pull governed data from the BI tool, and that will be directly used with the advanced analytics tool to write Python against it and produce an end-to-end result. And it's not that far off; I mean, this is running now, and it's something that we're really excited about.
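For a sense of what that end-to-end workflow could look like, here is a hedged sketch: pull governed data from the BI/semantic layer, then let the agent write and run Python against it. The `fetch_governed_data` and `generate_analysis_code` helpers are hypothetical placeholders, not Zenlytic's actual API.

```python
# Sketch of the end-to-end workflow described above: governed data in,
# agent-written Python analytics out. Helper names are assumptions.

import pandas as pd

def fetch_governed_data(metric: str, group_by: str) -> pd.DataFrame:
    """Hypothetical: returns a DataFrame built from governed definitions."""
    raise NotImplementedError

def generate_analysis_code(df: pd.DataFrame, task: str) -> str:
    """Hypothetical: asks the LLM for Python that analyzes `df`."""
    raise NotImplementedError

def run_advanced_analysis(metric: str, group_by: str, task: str):
    df = fetch_governed_data(metric, group_by)
    code = generate_analysis_code(df, task)
    # Execute the generated code in a restricted namespace; a production
    # system would sandbox this far more aggressively.
    scope = {"df": df, "pd": pd}
    exec(code, scope)
    return scope.get("result")

# e.g. run_advanced_analysis("revenue", "month", "fit a seasonal forecast")
```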
[00:50:36] Unknown:
Are there any other aspects of this concept of data agents and AI agents and the ways that you are applying that functionality to Zenlytic and the overall space of business intelligence that we didn't discuss yet that you'd like to cover before we close out the show?
[00:50:52] Unknown:
No, I think that's pretty much everything. The main TL;DR is that we're really working to make this a coworker for you: someone you can delegate tasks to, have it go off, accomplish the things you need, and give you the information you need to make good decisions. That's what we're laser-focused on doing.
[00:51:09] Unknown:
Yeah. The only thing I'd add is that I'd like to extend an open invitation to anyone to geek out about this technology. If you're building with this, my recommendations would be: first, I wouldn't start an AI project in 2024 without making it a data agent project; and second, the secret to making data agents work well is providing a really great environment for them to do their job in. Those are two of the takeaways on how to make these work. But if anyone wants to chat more about that, please hit us up.
[00:51:38] Unknown:
Alright. Well, for anybody who does want to follow up on that offer and keep track of the work that you and your teams are doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest gap in the tooling or technology that's available for data management, and I'll also say, or AI, today?
[00:51:58] Unknown:
I'll give two quick ones. On AI, it's mostly about the evaluation of the models. It's very difficult to evaluate nondeterministic things, and that's a big gap in the tooling right now. There are some early products that are starting to do a good job at it, but the big gap on the AI side is definitely in evaluations. On the data side, I think there are really exciting products to be built in the transformation layer, because there's a lot of grungy work in data transformation if you've done it before. You're pulling information and asking: why is this order ID not showing up in this table when it's showing up in the previous one? Is it a timing thing? Is it some other filter being applied? There's just so much grungy work of running the very same query with slightly different order IDs over and over again. Pushing that work off to a data agent, letting it iterate on that and then come back and say, "this is why I think the thing is not present."
That would be incredible. So I think that's a big opportunity that I'm sure the big transformation providers are all working on, but it's a really exciting one.
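That debugging loop is mechanical enough to sketch. Here is one hedged illustration, with made-up table names and a hypothetical `run_sql` helper: the agent re-runs near-identical lookups for the missing order ID across the lineage of tables and reports where the row disappears.

```python
# Hedged sketch of the transformation-debugging agent imagined above.
# Table names and `run_sql` are illustrative assumptions, not a real API.

def run_sql(sql: str) -> list[tuple]:
    """Hypothetical: executes SQL and returns rows."""
    raise NotImplementedError

# Ordered upstream-to-downstream lineage of models to check.
LINEAGE = ["raw_orders", "stg_orders", "int_orders_joined", "fct_orders"]

def trace_missing_order(order_id: str) -> str:
    previous = None
    for table in LINEAGE:
        # A real implementation would use parameterized queries here
        # rather than string interpolation.
        rows = run_sql(f"SELECT * FROM {table} WHERE order_id = '{order_id}'")
        if not rows:
            # The row existed upstream but vanished here: this model's
            # joins, filters, or timing logic are the likely suspects.
            return (f"{order_id} present in {previous} but missing from "
                    f"{table}; inspect that model's joins and filters.")
        previous = table
    return f"{order_id} is present in every table in the lineage."
```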
[00:53:06] Unknown:
Alright. Well, thank you both for taking the time today to join me and share the work that you've been doing at Zenlytic and your experiences of building these AI agents. Definitely a very interesting space, and it's great to hear from people who are pushing the forefront of it. So I appreciate all the time and energy that you're both putting into making business intelligence a more self-serve experience, and I hope you enjoy the rest of your day.
[00:53:28] Unknown:
Thanks, Tobias. Thanks for having us on. Thanks a ton, Tobias. It's been a pleasure.
[00:53:38] Unknown:
Thank you for listening. Don't forget to check out our other shows, podcast.init, which covers the Python language, its community, and the innovative ways it is being used, and the Machine Learning Podcast, which helps you go from idea to production with machine learning. Visit the site at dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@dataengineeringpodcast.com with your story. And to help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Introduction and Guest Introduction
Overview of Zenlytic and AI Integration
Conversational Approach and Data Agents
AI Journey and Architectural Changes
Data Agents vs Chatbots
Supporting Infrastructure for AI Models
Data Modeling and Context in AI
Customer Education and Expectations
Guidance in Decision Making with AI
Visualization and Multimodality in AI
Innovative Uses of AI and Data Agents
Future Evolution of AI Tools
Lessons Learned in Building AI Systems
Scalability and Cost of AI Inference
When AI Agents Are Not the Right Choice
Future Plans and Exciting Projects
Final Thoughts and Open Invitation
Biggest Gaps in AI and Data Management Tools