AI's $600B Question
InsightBase AI - https://insightbase.ai/ (Sponsor)
InsightBase lets you use AI to chat with your database and build powerful analytics dashboards. Anyone in an organization can use InsightBase right away, since it requires no SQL or coding knowledge.
AI’s $600B Question | Sequoia Capital
David Cahn's article examines the growing gap between AI infrastructure spending and the revenue needed to justify it, an implied shortfall that he estimates has grown from $200 billion to $600 billion even as companies like Nvidia continue rapid advancement and investment in AI infrastructure. With GPU supply shortages easing and Nvidia reaching a significant market valuation, questions remain about the actual economic returns on these investments. OpenAI leads in AI-generated revenue, but overall the market's value realization lags far behind the massive capital expenditures. Cahn highlights the speculative nature of technology waves and the need for value-centric company building in AI. (Read More)
AI Coding Software Magic Looking To Conjure Up $200M At $1.5B
Magic, a San Francisco-based AI firm focused on developing software-writing models, is reportedly seeking $200 million in new funding at a $1.5 billion valuation. Despite Magic having neither revenue nor a product, the AI software sector remains hot, attracting investors like Jane Street Capital. Magic's last funding round was $117 million in February. Other notable fundraisers in the AI coding space include Builder.ai, Augment, and Cognition, indicating strong investor interest in AI-driven software development solutions. (Read More)
Here’s how you can build and train GPT-2 from scratch using PyTorch | Differ
Amit Kharel's article on Differ explains how to build and train a GPT-2-style model from scratch using PyTorch, from writing a custom tokenizer to training and evaluating a simple language model. He provides the code, a dataset, and a Jupyter Notebook for hands-on learning. The article walks through tokenization, data loading, and embedding, and ends with sample output from the trained model; more details follow in Part 2. (Read More)
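The first step the article covers is a custom tokenizer. As a rough illustration of the idea (this is a minimal character-level sketch, not Kharel's actual code, and the class name and corpus are invented here):

```python
class CharTokenizer:
    """Character-level tokenizer sketch: maps each unique character
    in a training corpus to an integer id, and back."""

    def __init__(self, text):
        # Sort the unique characters so the id assignment is deterministic.
        chars = sorted(set(text))
        self.stoi = {ch: i for i, ch in enumerate(chars)}
        self.itos = {i: ch for ch, i in self.stoi.items()}
        self.vocab_size = len(chars)

    def encode(self, s):
        # Text -> list of token ids (assumes every char appeared in the corpus).
        return [self.stoi[ch] for ch in s]

    def decode(self, ids):
        # List of token ids -> text.
        return "".join(self.itos[i] for i in ids)

corpus = "hello world"          # toy corpus for illustration
tok = CharTokenizer(corpus)
ids = tok.encode("hello")       # e.g. a list of 5 integer ids
assert tok.decode(ids) == "hello"
```

GPT-2 itself uses byte-pair encoding rather than character-level tokenization, but the encode/decode round trip shown here is the same contract any tokenizer in such a pipeline has to satisfy before the ids are fed to an embedding layer.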
I received an AI email - Tim Hårek
Tim Hårek reports receiving an email from an "AI agent" named Raymond promoting a CMS called Wisp. Initially, Tim thought Raymond was a genuine reader interested in his blog, but upon investigating Raymond's claims and Wisp’s blog, he discovered the personalized email was AI-generated. This sparked a discussion about the ethical implications of AI-generated outreach and how it affects personal privacy and trust. Tim expresses frustration with such practices, considering them spam. (Read More)
Reasoning in Large Language Models: A Geometric Perspective
The paper "Reasoning in Large Language Models: A Geometric Perspective" by Romain Cosentino and Sarath Shekkizhar explores how large language models (LLMs) can improve their reasoning capabilities through a geometric lens. The authors connect the expressive power of LLMs to the density of their self-attention graphs, showing that a higher intrinsic dimension within these graphs correlates with greater reasoning ability. They support their claims with theoretical analysis and empirical evidence, linking their geometric framework to recent advancements in LLM reasoning methods. (Read More)
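The self-attention graphs the paper analyzes are derived from standard attention weights. As a reminder of where those weights come from, here is a minimal pure-Python sketch of scaled dot-product self-attention (illustrative only; the function names and toy matrices are invented here, and this is not the authors' code):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention on plain nested lists.
    Q, K, V are n x d matrices (lists of rows). Returns the
    n x n attention-weight matrix and the n x d output."""
    d = len(Q[0])
    weights = []
    for q in Q:
        # Score this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights.append(softmax(scores))
    # Each output row is the attention-weighted average of the value rows.
    out = [[sum(w[j] * V[j][c] for j in range(len(V)))
            for c in range(len(V[0]))]
           for w in weights]
    return weights, out

# Toy 3-token sequence with 2-dimensional embeddings; self-attention
# uses the same matrix as queries, keys, and values for simplicity.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W, out = self_attention(X, X, X)
```

Each row of `W` is a probability distribution over tokens; thresholding or weighting these rows yields the attention graph whose density and intrinsic dimension the paper relates to reasoning ability.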