
August 1, 2024


AI Overview: Your Weekly AI Briefing

Hello Niuralogists!

Welcome to this week’s edition, where we explore the latest advancements in artificial intelligence. We’ll delve into how these innovations impact different facets of our lives—from the workplace and business environments to policies and personal experiences. This issue features intriguing updates, such as Mistral’s Large 2 taking on AI giants and the self-powered 'bugs' that skim water to gather environmental data.

For more in-depth coverage, keep reading…

Mistral's Large 2 Challenges AI Giants

Mistral has unveiled Large 2, a new AI model that it claims matches or exceeds the recent offerings from OpenAI and Meta despite using far fewer parameters. Large 2 has 123 billion parameters, less than a third of the 405 billion in Meta's Llama 3.1, yet it outperforms that model in code generation and math. It also offers a 128,000-token context window and enhanced multilingual support spanning 12 languages and 80 coding languages. Mistral asserts that Large 2 hallucinates less and delivers more concise responses than leading AI models. The model is available for trial on Le Chat and on major cloud platforms, though commercial use requires a paid license. Delivering performance benchmarks comparable to GPT-4 at a third of Llama 3.1 405B's size is a notable achievement, and with two GPT-4-level open models released in just two days, the competitive pressure on closed-AI leaders like OpenAI, Anthropic, and Google has intensified.

Self-Powered 'Bugs' Skim Water to Collect Environmental Data

Researchers at Binghamton University, State University of New York, have developed a self-powered "bug" that can skim across water, potentially revolutionizing aquatic robotics. The device runs on power generated by ocean bacteria, offering a reliable alternative to solar, kinetic, or thermal energy systems and remaining effective under adverse conditions. It uses a Janus interface, hydrophilic on one side and hydrophobic on the other, to absorb and retain nutrients from the water, fueling bacterial spore production. The bacteria generate power by transitioning between vegetative cells and spores depending on environmental conditions, which extends the device's operational life.


Amazon Aims to Surpass Nvidia with More Affordable, Faster AI Chips

Amazon is striving to outpace Nvidia in the AI chip market by developing cheaper and faster AI chips at its lab in Austin, Texas. Engineers tested a new server design using Amazon's AI chips on July 26th, aiming to reduce reliance on Nvidia's costly chips that power Amazon Web Services (AWS). The initiative seeks to offer customers affordable solutions for complex calculations while maintaining Amazon's competitiveness in cloud computing and AI. The latest AI chips, Trainium and Inferentia, promise a 40-50% improvement in price-performance ratio compared to Nvidia solutions. AWS, a significant revenue source for Amazon, used 250,000 Graviton chips and 80,000 custom AI chips during Prime Day, achieving record sales. As Amazon ramps up AI chip development, Nvidia remains competitive with the upcoming release of its Blackwell chips, promising significant performance gains and maintaining a strong client base including Amazon, Google, Microsoft, OpenAI, and Meta.

Open-Source AI Narrowing Gap with Top Proprietary Systems

Artificial intelligence startup Galileo released a comprehensive benchmark revealing that open-source language models are rapidly closing the performance gap with proprietary counterparts. The second annual Hallucination Index from Galileo evaluated 22 leading large language models, showing significant improvement in open-source models over just eight months. This trend could democratize advanced AI capabilities, lowering barriers to entry for startups and researchers while pressuring established players to innovate. Anthropic’s Claude 3.5 Sonnet topped the index, indicating a shift in the AI landscape. The index also highlighted the cost-effectiveness of models like Google’s Gemini 1.5 Flash, emphasizing the importance of balancing performance with affordability for businesses deploying AI at scale.


OpenAI Unveils New AI Search Engine

OpenAI has unveiled SearchGPT, an AI-powered search engine that leverages advanced AI models and Internet data to deliver timely, relevant answers, positioning the company as a direct competitor to Google's search dominance. The prototype organizes search results into summarized snippets with attribution links and supports follow-up questions, echoing features from AI startup Perplexity. Powered by GPT-4, SearchGPT will initially be available to 10,000 test users, with plans to integrate its features into ChatGPT. To access SearchGPT, users must log into their ChatGPT account and join the waitlist. This development could disrupt the search industry, altering how users engage with online information and raising concerns about data privacy, traditional SEO practices, and the impact on content creators.

Newsletter

📬 Receive our amazing posts straight to your inbox. Get the latest news, company insights, and Niural updates.


Q&Ai

Anthropic vs. Google: Who’s Leading in the Fight Against AI Hallucinations?

Galileo, a prominent developer of generative AI tools for enterprise use, has published its latest Hallucination Index, evaluating 22 leading large language models (LLMs) from major companies like OpenAI, Anthropic, Google, and Meta. This year's index has expanded to include 11 new models, reflecting the rapid growth in both open- and closed-source LLMs. The index uses Galileo's proprietary metric, context adherence, to assess output accuracy across varying input lengths, aiding enterprises in balancing cost and performance. Key findings highlight Anthropic's Claude 3.5 Sonnet as the top overall performer, Google's Gemini 1.5 Flash as the most cost-effective, and Alibaba's Qwen2-72B-Instruct as the best open-source model.
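Galileo's context-adherence metric is proprietary, but the underlying idea, scoring how well an answer is grounded in the supplied context, can be illustrated with a toy word-overlap check. This is a rough sketch for intuition only, not Galileo's actual method; real hallucination metrics typically use model-based judges rather than lexical overlap:

```python
def context_adherence(answer: str, context: str) -> float:
    """Toy grounding score: fraction of answer words found in the context.

    Illustrative only: a word-overlap proxy for the idea of penalizing
    content the context does not support. Returns a value in [0, 1].
    """
    answer_words = {w.lower().strip(".,!?") for w in answer.split()}
    context_words = {w.lower().strip(".,!?") for w in context.split()}
    if not answer_words:
        return 1.0  # An empty answer makes no unsupported claims.
    supported = answer_words & context_words
    return len(supported) / len(answer_words)
```

A fully grounded answer scores 1.0, while an answer that introduces words absent from the context scores lower, mirroring how an adherence metric flags likely hallucinations.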

Can Randomization Improve Fairness in AI-Based Scarcity Allocations?

A recent study highlights that incorporating structured randomization into machine-learning models used for resource allocation can enhance fairness by addressing uncertainties and biases inherent in these systems. Researchers from MIT and Northeastern University have demonstrated that randomizing decisions, such as those in job candidate rankings or kidney transplant prioritizations, can prevent systemic biases and ensure a more equitable distribution of opportunities. Their new framework suggests using weighted lotteries to adjust the degree of randomization based on decision uncertainty, thereby improving fairness without compromising model efficiency. This approach aims to balance the trade-off between fairness and utility, potentially revolutionizing how resources are allocated in various fields, from job recruitment to medical care.

Tools

👨‍💻 Claude Engineer is an interactive command-line interface powered by Claude 3.5 models.

💻 Elementor AI is an AI website builder for WordPress.

🎥 Haiper AI 1.5 is a free-to-try AI video generator with up to 8-second generations.

🚀 Prodia adds generative AI to your app with one API.

👨‍🎤 CharacterGen generates 3D characters from a single image.

Follow us on Twitter and LinkedIn for more content on artificial intelligence, global payments, and compliance. Learn more about how Niural uses AI for global payments and team management to care for your company's most valuable resource: your people.

See you next week!

Request a demo