💹 Generative AI for emerging markets 📈

My father’s career in geotechnical engineering is nothing short of legendary. With over 40 years in the industry, he's not exactly settling into a quiet retirement. Instead, he's still a go-to consultant in his field. His footprint at Teks Geotechnical Consulting is enormous, spanning over 400 diverse projects, from safeguarding coastlines to engineering intricate infrastructure. He's not a one-trick pony; he's left his mark across multiple sectors.

For years, my siblings and I have served on the company's board. I have had the unique opportunity to serve as Managing Director from an age that would surprise most—a story that deserves its own spotlight. While my younger self couldn't quite grasp my dad's ambitious vision for me, the path has been nothing short of transformative.

Over time, my dad has increasingly leaned on me for strategic oversight. While I've got the business aspects sorted, my proficiency in geotechnical engineering, gained through years of hands-on exposure, still pales in comparison to his unparalleled mastery.

So naturally, it's got me noodling over some big questions:

  • How can I harness machine learning to not just scan Teks’ 400+ project history, but also surface actionable insights for new initiatives?

  • What's the tech blueprint for bottling my dad's geotechnical wizardry into a slick, offline-friendly system that syncs with my day-to-day work?

  • With spotty internet and diverse hardware out there, what tech choices will make this system a reliable workhorse on even a mid-tier laptop?

Big questions? Absolutely.

Introducing Teks Chat: a sleek, data-smart hub that not only lets me channel my dad's deep-seated expertise, but also dishes out actionable insights for our next strategic plays—all while humming fully offline on my M1 Mac.

Catch the magic in motion in the accompanying video:

Intrigued by the wizardry? Check out the link below for the nitty-gritty: Teks Chat using Llama2 with LocalGPT on Apple Silicon
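
Under the hood, this is roughly the retrieval-augmented pattern LocalGPT is built around: chunk the project documents, embed them into a local vector store, and let a quantized Llama 2 answer questions against whatever chunks get retrieved. The sketch below is a minimal illustration of that pattern using LangChain, Chroma, and llama-cpp-python; the directory names, model file, and parameters are placeholders, not the actual Teks Chat configuration.

```python
# Minimal sketch of a LocalGPT-style, fully offline document Q&A pipeline.
# All paths and model names are illustrative placeholders, not the real Teks Chat setup.
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA

# 1. Load the project reports and split them into overlapping chunks.
docs = DirectoryLoader("project_reports/", glob="**/*.pdf", loader_cls=PyPDFLoader).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# 2. Embed the chunks locally and persist them in a Chroma vector store.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(chunks, embeddings, persist_directory="db")

# 3. Run a quantized Llama 2 chat model offline via llama.cpp (Metal-accelerated on Apple Silicon).
llm = LlamaCpp(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=1)

# 4. Answer questions grounded in the most relevant chunks.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever(search_kwargs={"k": 4}))
print(qa.run("Which past coastal-protection projects are most relevant to a new seawall bid?"))
```

Steps 1 and 2 only need to rerun when the document set changes; for day-to-day questions, the persisted store is simply reopened, which is what keeps this workable on a mid-tier laptop.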

Key Takeaways

  1. Bridging Generations with AI: Walking a similar path to mine? Good news: advances in machine learning have made it a breeze to digest hundreds of your company's docs and spit out valuable insights for your next big move, anytime and anywhere.

  2. Unlocking Next-Level AI on Everyday Hardware: If Teks Chat can hum on an Apple M1/M2, just think what it could do with a beefed-up NVIDIA or AMD GPU! This isn't just nerd-speak. Right now, Llama 2 is the brains behind Meta's new Ray-Ban smart glasses, and large language models are the secret sauce behind Apple's iOS 17 keyboard improvements. The kicker? Offline LLMs could soon supercharge Apple Vision Pro's search and reasoning, likely running on a chip that's a close cousin of the M1/M2!

  3. Meta’s AI Landscape: Meta's modular, multi-agent strategy could reshape how we think about AI. We're not just talking minor adjustments; we're on the edge of a massive transition from one-size-fits-all AI to custom-built agents specialized for unique tasks and needs. Take WhatsApp's new AI chats as a case in point. AI agents in WhatsApp? Yep, it sounds unconventional, but it's happening. 🤷🏾‍♀️

  4. Unleash Hyper-Personalization via Cloud APIs: If you can afford even a touch of online access, consider cloud-based Large Language Models from a provider like OpenAI instead of sticking with an offline setup like mine using Llama 2. This swap will turbocharge your system's performance, responsiveness, and overall user experience. It's a game-changer, and it's mostly a one-component change; see the short sketch after this list.

  5. Mastering Context: The Unsung Genius of Large Language Models: The uncanny ability of ChatGPT to leverage conversation history for laser-focused replies isn't just a neat feature; it's groundbreaking. You'll never truly appreciate the computational efficiency of systems like ChatGPT until you've built your own LLM-powered assistant (the toy sketch at the end of this list shows the basic mechanics). I'm endlessly in awe.

  6. The Power and Potential of Open Source: Open source isn't just a nice-to-have—it's a game-changer that's turbocharging the evolution of Large Language Models. While I've laid out a streamlined setup tailored to my needs, the landscape of possibilities is vast and untapped. Whether it's optimizing through fine-tuning or exploring a treasure trove of specialized models and utilities (think Open Interpreter for on-the-fly Mac code execution), we're just scratching the surface.
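
Picking up on takeaway 4: in a pipeline like the one sketched earlier, moving to a cloud API is essentially a swap of the model component. Here's an illustrative variant that assumes the same persisted Chroma store, an OPENAI_API_KEY in the environment, and a placeholder model name; it is not part of the actual Teks Chat code.

```python
# Illustrative swap from the local Llama 2 model to a cloud-hosted one.
# Assumes the persisted Chroma store from the earlier sketch and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Reopen the persisted vector store built during ingestion.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma(persist_directory="db", embedding_function=embeddings)

# The only substantive change: a hosted chat model replaces the local LlamaCpp instance.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever(search_kwargs={"k": 4}))
print(qa.run("Which recent projects are closest to the new port-expansion brief?"))
```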

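And on takeaway 5: mechanically, "leveraging historical context" means the running conversation is replayed to the model on every turn; the genius is in doing that efficiently within a fixed context window. Below is a toy sketch of the mechanics using llama-cpp-python, with a placeholder model path; it is not how ChatGPT itself is implemented.

```python
# Toy illustration of conversational context: each turn's messages are appended to a
# running history and passed back to the model, so follow-ups like "that project"
# still resolve. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096, verbose=False)

history = [{"role": "system", "content": "You are a geotechnical project assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = llm.create_chat_completion(messages=history)["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})  # remembered for the next turn
    return reply

print(chat("Summarize our largest slope-stabilization project."))
print(chat("What soil conditions did that project have to deal with?"))
```

The catch is that this history shares the model's fixed context window with any retrieved documents, which is where much of the real engineering effort in systems like ChatGPT goes.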