• Redefining Conversational AI: Ronald Ashri on Building OpenDialog
    Dec 10 2024

    In this episode of How We Made That App, host Madhukar Kumar, CMO of SingleStore, delves into the fascinating world of conversational AI with Ronald Ashri, co-founder of OpenDialog. Ronald shares his journey from studying multi-agent systems to creating OpenDialog, a platform designed to revolutionize how humans and AI collaborate.

    Discover how Ronald’s expertise in autonomous agents and conversational design is shaping AI tools for regulated industries. From balancing creativity and control in conversational frameworks to leveraging knowledge graphs and retrieval-augmented generation (RAG), this episode explores the cutting-edge innovations driving enterprise AI forward.

    Ronald also shares his thoughts on the challenges of building trustworthy AI, preventing hallucinations, and the future of large language models. Tune in to learn how a vision for smarter human-AI interactions is becoming a reality.

    1 hr and 2 mins
  • Beyond Alexa: The Future of Voice, AI, and Innovation with Igor Jablokov
    Nov 5 2024

    In this episode of How We Made That App, host Madhukar Kumar, CMO of SingleStore, sits down with Igor Jablokov, the pioneering mind behind Amazon Alexa and CEO of Pryon. Igor shares his journey from the early days of voice recognition technology to leading innovations in artificial intelligence. Discover the story behind Alexa’s creation, the technical challenges and breakthroughs in building a voice-powered assistant, and the forward-thinking solutions that reshaped how we interact with technology. Igor also delves into the ethical considerations and governance of AI, tackling common misconceptions and offering a glimpse into the future of intelligent systems. Tune in to hear how a vision for voice technology evolved into a transformative force in AI.

    Key Takeaways:

    • Igor Jablokov’s journey from Alexa co-creator to CEO of Pryon

    • The inception and development of Amazon Alexa

    • Technical insights into overcoming challenges in voice recognition

    • Ethical considerations and governance in the field of AI

    • Future perspectives on the role of AI in everyday life

    Subscribe now and don’t miss an episode of How We Made That App, where we explore the stories behind the most impactful apps and the innovators who make them happen.

    Links

    • Connect with Igor
    • Visit Pryon
    • Connect with Madhukar
    • Visit SingleStore
    56 mins
  • How Flowd is Revolutionizing Water Management with CTO Marc Locchi
    Aug 20 2024

    In this episode of "How We Made That App," host Madhukar Kumar, CMO of SingleStore, sits down with Marc Locchi, the CTO of Flowd, a groundbreaking water management company from Australia. Marc shares his journey from a frustrated programmer to leading the development of Flowd, an app that’s transforming how businesses detect and manage water leaks in real-time. Discover the story behind Flowd’s creation, the technical challenges faced, and the innovative solutions employed, including the use of Laravel and AWS for scalability. Marc also delves into the impact Flowd has had on saving water and reducing costs for clients, sharing fascinating success stories along the way. Tune in to learn how a chance meeting and a passion for technology led to a solution that’s making waves in water conservation.

    Key Takeaways:

    • Marc Locchi's journey to becoming CTO of Flowd

    • The inception and development of Flowd

    • Technical insights on building a scalable application with Laravel

    • Real-world impact stories of Flowd’s water management solutions

    • Collaboration with AWS to enhance performance and scalability

    Subscribe now and don’t miss an episode of "How We Made That App," where we explore the stories behind the most innovative apps and the people who make them happen.

    Links

    Connect with Marc

    Visit Flowd

    Connect with Madhukar

    Visit SingleStore

    44 mins
  • How Flowise is Changing the GenAI App Revolution
    Apr 30 2024

    Join us on this intriguing journey where host Madhukar Kumar uncovers the story of FlowiseAI, an AI-powered chatbot tool that soared to fame in the open-source community. Henry Heng, the Founder of FlowiseAI, explains how the tool was born out of the need to streamline repetitive onboarding queries. Listen in as Henry shares the unexpected explosion of interest following its open-sourcing and how community engagement, spearheaded by creators like Leon, has been pivotal to its growth. The conversation takes a fascinating turn with the discussion of Flowise’s versatility, extending to AWS and SingleStore’s creative uses for product descriptions, painting a vivid picture of the tool's expansive potential.

    Madhukar and Henry discuss the dynamic realm of data platforms, touching on the integration of large language models into developer workflows and the inevitable balance between commercial giants and open-source alternatives. Henry brings a personal perspective to the table, detailing his use of Flowise for managing property documentation and crafting an accompanying chatbot. Henry also addresses the critical issue of data privacy in enterprise environments, exploring how Flowise approaches these challenges. The strategy behind monetizing Flowise is also revealed, hinting at an upcoming cloud-hosted iteration and its future under the Y Combinator umbrella. Don't miss out on this insightful conversation on how FlowiseAI is revolutionizing GenAI!

    Key Quotes:

    • “What I've experienced is that first you go through the architect. So the architects of companies and the senior teams as well will decide what architecture we want to go with. And usually I was part of the conversation as well. We tend to decide between NoSQL or SQL depending on the use cases. For schemas that are fast-changing or inconsistent, not like tabular, structured data, we often use NoSQL or MongoDB. And for structured data, we used MySQL at my previous company. That's how we kind of decide based on the use cases.”
    • “Judging from the interactions that I have with the community, I would say 80 percent of them are using OpenAI, and open source is definitely catching up but is still lagging behind OpenAI. But I do see the trend starting to pick up, especially now that you have Mixtral, you have Llama 2 as well. But the problem is that I think the cost is still the major factor. Like, people tend to go to whichever large language model has the lowest cost, right?”

    Timestamps

    • (00:00) Building FlowiseAI to open source
    • (05:07) Innovative Use Cases of Flowise
    • (10:15) Types of users of Flowise
    • (19:39) Database Architecture and Future Technology
    • (32:30) Quick hits with Henry

    Links

    Connect with Henry

    Visit FlowiseAI

    Connect with Madhukar

    Visit SingleStore

    37 mins
  • Pioneering AI Teaching Models with Dev Aditya
    Apr 16 2024

    On this episode of How We Made That App, join host Madhukar Kumar as he delves into the groundbreaking realm of AI in education with Dev Aditya, CEO and Co-Founder of the Otermans Institute. Discover the evolution from traditional teaching methods to the emergence of AI avatar educators, ushering in a new era of learning.

    Dev explores how pandemic-induced innovation spurred the development of AI models, revolutionizing the educational landscape. These digital teachers aren't just transforming classrooms and corporate training. They're also reshaping refugee education in collaboration with organizations like UNICEF.

    Dev takes a deep dive into the creation and refinement of culturally aware and pedagogically effective AI. He shares insights into the meticulous process behind AI model development, from the MVP's inception with 13,000 lines of Q&A to developing a robust seven-billion-parameter model, enriched by proprietary data from thousands of learners.

    We also discuss the broader implications of AI in data platforms and consumer businesses. Dev shares his journey from law to AI research, highlighting the importance of adaptability and logical thinking in this rapidly evolving field. Join us for an insightful conversation bridging the gap between inspiration and innovation in educational AI!

    Key Quotes:

    • “People like web only and app only, right? They like it. But in about July this year, we are launching alpha versions of our products as Edge AI. Now that's going to be a very narrowed-down version of the language models that we are working on right now, taking from these existing stacks. So that's going to be about 99 percent our stuff. And it's going to be running on people's devices. It's going to help with people's privacy. Your data stays in your device. And even as a business, it actually helps a lot because I am, hopefully, going to see a positive difference in our costs, because a lot of that cloud cost now rests in your device.”
    • “My way of dealing with AI is narrow intelligence: break a problem down into as many narrow points as possible, storyboard, storyboard color, as micro as possible. If you can break that down together, you can teach each agent and each model to do that phenomenally well. And then it's just an integration game. It will do better than a human being, you know, as a full director of a movie. Also, if you, from the business logic standpoint, understand what a director does, it is possible, theoretically. I don't think people go deep enough to understand what a teacher does, or what a doctor does. A doctor is not just a surgeon, right? How are they thinking, what is their mechanism? If you can break that down, you can easily make, like, probably there are 46, I'm just saying, 46 things that a doctor does, right? If you have 46 agents working together, each one knowing that, it would be amazing. That's a different game. I think agents are coming.”
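
    Dev's "narrow intelligence" idea can be illustrated with a toy pipeline, where plain functions stand in for narrowly trained agents. This is purely a hedged sketch of the decomposition-then-integration concept; every name below is invented for illustration and nothing here reflects the Otermans Institute's actual system.

    ```python
    # Toy sketch: break a broad task ("teach a topic") into narrow steps,
    # give each "agent" one job it does well, then integrate the results.
    # Each function is a stand-in for a narrowly fine-tuned model.

    def outline_agent(topic):
        """Narrow agent #1: produce the questions a lesson should answer."""
        return [f"What is {topic}?", f"Why does {topic} matter?"]

    def answer_agent(question):
        """Narrow agent #2: answer exactly one question (stubbed here)."""
        return f"[answer to: {question}]"

    def assemble_agent(pairs):
        """Narrow agent #3: integrate the pieces into one lesson."""
        return "\n".join(f"{q}\n{a}" for q, a in pairs)

    def run_lesson(topic):
        # The "integration game": chain the narrow agents together.
        questions = outline_agent(topic)
        answers = [answer_agent(q) for q in questions]
        return assemble_agent(zip(questions, answers))

    print(run_lesson("photosynthesis"))
    ```

    Swapping any stub for a real model changes one function, not the pipeline, which is the point Dev makes about teaching each agent one thing "phenomenally well."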

    Timestamps

    • (00:00) - AI Avatar Teachers in Education
    • (09:29) - AI Teaching Model Development Challenges
    • (13:27) - Model Fine-Tuning for Knowledge Augmentation
    • (25:22) - Evolution of Data Platforms and AI
    • (32:15) - Technology Trends in Consumer Business

    Links

    Connect with Dev

    Visit the Otermans Institute

    Connect with Madhukar

    Visit SingleStore

    46 mins
  • Revolutionizing Analytics Through User Privacy with Jack Ellis
    Apr 2 2024

    In this episode of How We Made That App, host Madhukar welcomes Jack Ellis, CTO and co-founder of Fathom Analytics, who shares the inside scoop on how their platform is revolutionizing the world of web analytics by putting user privacy at the forefront. With a privacy-first ethos that discards personal data like IP addresses post-processing, Fathom offers real-time analytics while ensuring user privacy, breaking away from traditional cookie-based tools like Google Analytics. Jack unpacks the technical challenges they faced in building a robust, privacy-centric analytics service, and he explains their commitment to privacy as a fundamental service feature rather than just a marketing strategy.

    Jack dives into the fascinating world of web development and software engineering practices, reflecting on Fathom's journey with MySQL and PHP and detailing the trials and tribulations of scaling in high-traffic scenarios. He contrasts the robustness of PHP and the rising popularity of frameworks like Laravel with the allure of Next.js among the younger developer community. Jack also explores the evolution from monolithic applications to serverless architecture and the implications for performance and scaling, particularly as Fathom efficiently serves millions of data points.

    Jack touches on the convergence of AI with database technology and its promising applications in healthcare, such as enhancing user insights and decision-making. He shares intriguing thoughts on how AI can be harnessed for societal good, drawing examples from SingleStore's work with Thorn. You don’t want to miss this episode on how the world of analytics is changing!

    Key Quotes:

    • “When we started selling analytics, people were a bit hesitant to pay for it, but over time people have started valuing privacy over everything. And so it's just compounded from there as people have become more aware of the issues. Some people absolutely still will only use Google Analytics, but the segment of the market that is moving towards solutions like us is growing.”
    • “People became used to Google's opaque ways of processing data. They weren't sure what data was being stored, how long they were keeping the IP address for, all of these other personal things as well. And we came along and we basically said, we're not interested in tracking person A around multiple different websites. We're actually only interested in person A's experience on one website. We do not, under any circumstances, want to have a way to be able to profile an individual IP address across multiple entities. And so we invented this mechanism where the web traffic would come in and we'd process it and we'd work out whether they're unique and whatever else. And then we would discard the personal data.”
    • “The bottleneck for most applications is not your web framework, it's always your database. And I ran through Wikipedia's numbers, Facebook's numbers, and I said it doesn't matter; we can add compute, that's easy peasy. It's always the database, every single time. So stop worrying about what framework you're using and pick the right database that has proven that it can actually scale.”
    • “If you're using an exclusively OLTP database, you might think you're fine. But when you're trying to make mass modifications, mass deletions, mass moving of data, OLTP databases seem to fall over. I had RDS side by side with SingleStore, the same cost for both of them, and I was showing people how quickly SingleStore can do stuff. That makes a huge difference, and it gives you confidence, and I think that you need a database that's going to be able to do that.”
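
    The discard-after-processing mechanism Jack describes can be sketched roughly as follows. This is an illustrative assumption, not Fathom's actual implementation: a salted, site-scoped hash stands in for whatever anonymization Fathom really uses, and all function names are hypothetical.

    ```python
    # Sketch of "process, decide uniqueness, discard the personal data":
    # the IP is reduced to a one-way hash scoped to a single site, so the
    # same visitor cannot be correlated across different websites, and the
    # raw IP is never stored. Salt rotation (e.g. daily) is assumed.
    import hashlib

    def visitor_signature(ip, site_id, salt):
        # Including site_id means site-a and site-b produce unrelated
        # signatures for the same IP -- no cross-site profiling.
        raw = f"{salt}:{site_id}:{ip}".encode()
        return hashlib.sha256(raw).hexdigest()

    def record_pageview(ip, site_id, salt, seen):
        sig = visitor_signature(ip, site_id, salt)
        is_unique = sig not in seen
        seen.add(sig)
        # Only the aggregate fact survives; the IP is discarded here.
        return {"unique": is_unique}

    seen = set()
    print(record_pageview("203.0.113.7", "site-a", "salt-2024-04-02", seen))  # {'unique': True}
    print(record_pageview("203.0.113.7", "site-a", "salt-2024-04-02", seen))  # {'unique': False}
    ```

    Rotating the salt periodically would cap how long any signature stays linkable, which matches the spirit of discarding personal data after processing.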

    Timestamps

    • (00:55) Valuing consumer’s privacy
    • (06:01) Creating Fathom Analytics' architecture
    • (20:48) Compounding growth to scale
    • (23:08) Structuring team functions
    • (25:39) Developing features and product design
    • (38:42) Advice for building applications

    Links

    Connect with Jack

    Visit Fathom Analytics

    Connect with Madhukar

    Visit SingleStore

    41 mins
  • Revolutionizing Language Models and Data Processing with LlamaIndex
    Mar 19 2024

    On this episode of How We Made That App, host Madhukar Kumar welcomes Co-Founder and CEO of LlamaIndex, Jerry Liu! Jerry takes us from the humble beginnings of GPT Index to the impactful rise of LlamaIndex, a game-changer in the data frameworks landscape. Prepare to be enthralled by how LlamaIndex is spearheading retrieval-augmented generation (RAG) technology, setting a new paradigm for developers to harness private data sources in crafting groundbreaking applications. Moreover, the adoption of LlamaIndex by leading companies underscores its pivotal role in reshaping the AI industry.

    Through the rapidly evolving world of language model providers, discover the agility of model-agnostic platforms that cater to the ever-changing landscape of AI applications. As Jerry illuminates, the shift from GPT-4 to Claude 3 Opus signifies a broader trend towards efficiency and adaptability. Jerry also explores the transformation of data processing, from vector databases to the advent of "live RAG" systems, heralding a new era of real-time, user-facing applications that seamlessly integrate freshly assimilated information. This is a testament to how LlamaIndex is at the forefront of AI's evolution, offering a powerful suite of tools that revolutionize data interaction.

    Concluding our exploration, we turn to the orchestration of agents within AI frameworks, a domain teeming with complexity yet brimming with potential. Jerry delves into the multifaceted roles of agents, bridging simple LLM reasoning tasks with sophisticated query decomposition and stateful executions. We reflect on the future of software engineering as agent-oriented architectures redefine the sector and invite our community to contribute to the flourishing open-source initiative. Join the ranks of data enthusiasts and PDF parsing experts who are collectively sculpting the next chapter of AI interaction!

    Key Quotes:

    • “If you're a fine-tuning API, you either have to cater to the ML researcher or the AI engineer. And to be honest, most AI engineers are not going to care about fine-tuning if they can just hack together some system initially that kind of works. And so I think for more AI engineers to do fine-tuning, it either has to be such a simple UX that's basically just brainless, you might as well just do it, and the cost and latency have to come down. And then also there has to be guaranteed metrics improvements. Right now it's just unclear. You'd have to take your data set, format it, and then actually send it to the LLM and then hope that actually improves the metrics in some way. And I think that whole process could probably use some improvement right now.”
    • “We realized the open source will always be an unopinionated toolkit that anybody can go and use and build their own applications. But what we really want with the cloud offering is something a bit more managed, where if you're an enterprise developer, we want to help solve that clean data problem for you so that you're able to easily load in your different data sources, connect it to a vector store of your choice. And then we can help make decisions for you so that you don't have to own and maintain that and you can continue to write your application logic. So, LlamaCloud as it stands is basically a managed parsing and ingestion platform that focuses on getting users clean data to build performant RAG and LLM applications.”
    • “You have LLMs that do decision-making and tool calling, and typically, if you just take a look at a standard agent implementation, it's some sort of query decomposition plus tool use. And then you make a loop a little bit, so you run it multiple times, and by running it multiple times, that also means that you need to make this overall thing stateful, as opposed to stateless, so you have some way of tracking state throughout this whole execution run. And this includes, like, conversation memory; this includes just using a dictionary, but basically some way of tracking state. And then you complete execution, right? And then you get back a response. And so that actually is a roughly general interface that we have, like, a base abstraction for.”
    • “A lot of LLMs, more and more of them, are supporting function calling nowadays. So under the hood, within the LLM, the API gives you the ability to just specify a set of tools that the LLM API can decide to call for you. So it's actually just a really nice abstraction: instead of the user having to manually prompt the LLM to coerce it, a lot of these LLM providers just have the ability for you to specify functions under the hood, and if you just do a while loop over that, that's basically an agent, right? Because you just do a while loop until that function calling process is done, and that's basically, honestly, what the OpenAI Assistants agent is. And then if you go into some of the more recent agent papers, you can start doing things beyond just the next-step chain of thought into every stage ...
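
    Jerry's "while loop over function calling" description can be sketched in a few lines of Python. This is a minimal illustrative stand-in, not LlamaIndex's or OpenAI's actual implementation: the tool-calling LLM is stubbed with a fake function, and all names are hypothetical.

    ```python
    # Sketch of the agent loop Jerry describes: the LLM either returns a
    # final answer or asks to call a tool; we execute the tool, append the
    # result to the stateful message list (the "conversation memory"),
    # and loop until the function-calling process is done.

    def add(a, b):
        return a + b

    TOOLS = {"add": add}  # the "set of tools" the model may call

    def fake_llm(messages):
        """Stand-in for a tool-calling LLM API: requests one addition,
        then produces a final answer from the tool result."""
        tool_results = [m for m in messages if m["role"] == "tool"]
        if not tool_results:
            return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
        return {"content": f"The sum is {tool_results[-1]['content']}"}

    def run_agent(user_prompt):
        # Stateful execution: messages accumulate across loop iterations.
        messages = [{"role": "user", "content": user_prompt}]
        while True:
            reply = fake_llm(messages)
            if "tool_call" not in reply:      # no more tool use -> done
                return reply["content"]
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})

    print(run_agent("What is 2 + 3?"))  # prints: The sum is 5
    ```

    Replacing `fake_llm` with a real provider call that supports tool/function calling turns this same loop into the "basically an agent" pattern Jerry mentions.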
    41 mins
  • Data Dreams and AI Realities with Premal Shah
    Feb 20 2024

    In this engaging episode, host Madhukar Kumar dives deep into the world of data architecture, deployment processes, machine learning, and AI with special guest Premal Shah, the Co-Founder and Head of Engineering at 6sense. Join them as Premal traces the technological evolution of 6sense, from the early use of FTP to the current focus on streamlining features like GitHub Copilot and enhancing customer interactions with GenAI. Discover the journey through the adoption of Hive and Spark for big data processing, the implementation of microservice architecture, and massive-scale containerization. Learn about the team's cutting-edge projects and how they prioritize product development based on data value considerations. Premal also shares valuable advice for budding engineers looking to enter the field. Whether you're a tech enthusiast or an aspiring engineer, this episode provides fascinating insights into the ever-evolving landscape of technology!

    Key Quotes:

    • “What is important for our customers is that 6sense gives them the right insight and gives them the insight very quickly. So we have a lot of different products where people come in and they infer the data from what we're showing. Now it is our responsibility to help them do that faster. So now we are bringing in GenAI to give them the right summary, to help them to ask questions of the data right from within the product without having to think about it more or open a support ticket or ask their CSM.”
    • “We had to basically build a platform that would get all of our customers' data on a daily basis or hourly basis and process it every day and give them insights on top of it. So, we had some experience with Hadoop and Hive at that time. So we used that platform as our big data platform, and then we used MySQL as our metadata layer to store things like who is the customer, what products are there, who are the users, et cetera. So there was a clear separation of small data and big data.”
    • “Pretty soon we realized that the world is moving to microservices; we need to make it easy for our developers to build and deploy stuff in the microservice environment. So we started investing in containerization and figuring out how we could deploy it, and at that same time Kubernetes was coming in, so using Docker and Kubernetes we were able to blow up our monolith into microservices, and a lot of them. Now each team is responsible for their own service and scaling and managing and building and deploying the service. So the confluence of technologies and what you can foresee as being a challenge has really helped in making the transition to microservices.”
    • “We brought in SingleStore to say, ‘let's just move all of our UIs to one data lake and everybody gets a consistent view.’ There's only one copy. So we process everything on our Hive and Spark ecosystem, and then we take the subset of the processed data, move it to SingleStore, and that's the customer's access point.”
    • “We generally coordinate our releases around a particular time of the month, especially for the big features; things go behind feature flags. So not every customer immediately gets it. You know, some things go in beta, some things go direct to production. So there are different phases for different features. Then we have test environments that we have set up, so we can simulate as much as possible for the different integrations. Somebody has Salesforce, somebody has Marketo, Eloqua, HubSpot. All those environments can be tested.”
    • “A full stack person is pretty important these days. You should be able to understand the concepts of data and storage, at least the basics. Have a backing database to build an application on top of it, be able to write some backend APIs, backend code, and then build a decent-looking UI on top of it. That actually gives you an idea of what is involved end to end in building an application, versus being just focused on ‘I only do X versus Y.’ You need the versatility. A lot of employers are looking for that.”

    Timestamps

    • (00:23) Premal’s Background and Journey into Engineering
    • (06:37) Introduction to 6sense: The Company and Its Mission
    • (09:15) The Evolution of 6sense: From Idea to Reality
    • (13:07) The Technical Aspects: Data Management and Infrastructure
    • (18:03) Shifting to a micro-service-focused world
    • (31:16) Challenges of Data Management and Scaling
    • (38:26) Deployment Strategies in Large-Scale Systems
    • (47:49) The Impact of Generative AI on Development and Deployment
    • (55:18) The Future of AI in Engineering
    • (01:01:07) Quick Hits

    Links

    Connect with Premal

    Visit 6sense

    Connect with Madhukar

    Visit SingleStore
    1 hr and 3 mins