December 21, 2023 • 6 min read
Hello Niuralogists!
Step into this week's edition, your gateway to the ever-evolving realm of AI's most recent strides. Our main goal is to unravel these breakthroughs and scrutinize their impact on workplaces, businesses, policies, and individuals. In this issue, we explore a variety of fascinating updates: OpenAI granting its board the authority to safeguard against high-risk AI, a versatile new technique to elevate animation and empower artists, and Sam Altman dispelling rumors about the arrival of GPT-4.5.
For a more in-depth understanding, keep on reading...
OpenAI has revealed a new safety preparedness framework for AI management, empowering its recently restructured board to countermand executive decisions about releasing potentially risky AI models. The framework establishes a dedicated "Preparedness" team tasked with continuously assessing AI capabilities and risks and issuing reports to guide release decisions. Notably, the board now has the authority to block rollouts that executives have deemed safe if persistent concerns arise, addressing challenges stemming from OpenAI's recent leadership crisis. The formalized framework introduces heightened accountability and more rigorous vetting across risk categories including cybersecurity, weaponry, and persuasion. Amid the announcement, an OpenAI employee humorously quipped about the arrival of AGI. While the recent board controversy appeared to turn on power dynamics rather than safety, the framework's emphasis on safety represents a positive stride toward navigating AI ethically, especially as the era of Artificial General Intelligence approaches.
Artists shaping the narratives of animated characters in movies and video games now have a more versatile solution at their disposal, courtesy of a novel technique developed by MIT researchers. The method generates mathematical functions called barycentric coordinates, which dictate how 2D and 3D shapes bend, stretch, and move through space. The tool lets artists selectively control these functions, allowing, for instance, the tail of a 3D cat to move in a way that matches the artist's envisioned aesthetic. The researchers aimed for a comprehensive approach: artists can design or select smoothness energies for any shape, preview the resulting deformations, and choose the smoothness energy that best fits their preferences.
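The core idea can be sketched with classic triangle barycentric coordinates, a simplified stand-in for the generalized coordinates in the MIT work (this is an illustrative toy, not the researchers' method): a point inside a control "cage" gets weights from the cage vertices, and moving a vertex smoothly drags the point along.

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c).

    Returns weights (wa, wb, wc) that sum to 1 and reproduce p as
    wa*a + wb*b + wc*c.
    """
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    wb = (d11 * d20 - d01 * d21) / denom
    wc = (d00 * d21 - d01 * d20) / denom
    wa = 1.0 - wb - wc
    return np.array([wa, wb, wc])

# A point inside a triangular "cage".
a, b, c = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])
p = np.array([0.5, 0.5])
w = barycentric_coords(p, a, b, c)   # weights [0.5, 0.25, 0.25]

# Deform: move one cage vertex; the interior point follows smoothly,
# which is exactly how cage-based animation rigs drive a shape.
c_new = np.array([0.5, 2.0])
p_new = w[0] * a + w[1] * b + w[2] * c_new
```

The generalized coordinates in the paper extend this weighting scheme to arbitrary cage shapes, with the artist choosing the smoothness energy that determines how the weights spread across the interior.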
Speculation about a potential GPT-4.5 release set off a social media frenzy over the weekend, spurred by reports of ChatGPT responses hinting that the model was already deployed, even after OpenAI CEO Sam Altman dismissed the circulated screenshots. The initial rumors, sparked by a 'leaked' image touting GPT-4.5's advanced capabilities and revised pricing, were met with Altman's denial and observations of inconsistencies in the screenshots; anonymous sources nonetheless suggested his response might be a form of 'trolling.' Users subsequently reported instances of ChatGPT identifying itself as 'GPT-4.5-turbo,' which an OpenAI employee dismissed as a 'weird' hallucination. Even so, multiple reports noted a marked improvement in ChatGPT's performance, and the official ChatGPT X account added cryptic emojis to the speculation. A definitive release of a powerful new ChatGPT version may not be imminent, but the improved performance suggests upgrades are ongoing.
Tesla has revealed a significantly overhauled prototype of its humanoid Optimus robot, boasting substantial improvements in speed, dexterity, and balance. The Optimus Gen-2 sheds more than 20 pounds, enhancing mobility, and achieves a 30% increase in walking pace through redesigned feet and smoother movements. New finger sensors enable delicate object manipulation, showcased by the robot gently handling an egg in a demonstration. Additional upgrades, such as a more flexible neck, integrated electronics, and quicker hands with tactile feedback, aim to emulate human capabilities, and the Gen-2 exhibits remarkably smooth dance movements in real-time footage. This progress marks a significant leap toward multimodal, hyper-capable humanoid robots of the kind previously confined to the movies.
Recent research conducted by teams from Carnegie Mellon University and BerriAI has found that Google's newly unveiled language model, Gemini, fails to surpass OpenAI's GPT-3.5 Turbo in various tasks. Despite the glossy demo video released by Google showcasing Gemini, the company faced criticism for staged interactions between the presenter and the AI. The study reveals that Gemini Pro, the most powerful version available to consumers, lags behind the older and supposedly less advanced GPT-3.5 Turbo across most tasks, despite months of development. This comparison underscores the effectiveness of GPT-3.5 Turbo, which has been freely accessible for an extended period, while the stronger GPT-4 and GPT-4V (the multimodal offering) have been available to ChatGPT Plus and Enterprise subscribers throughout the year.
Research conducted by DTU, the University of Copenhagen, ITU, and Northeastern University in the US demonstrates that training 'transformer models,' similar to those used in language processing (such as ChatGPT), on large datasets of personal information makes it possible to organize the data systematically and predict various life events. After an initial training phase in which the model learns patterns in the data, it surpasses other advanced neural networks at forecasting outcomes such as personality traits, and it estimates time of death with remarkable accuracy.
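As a rough illustration of the approach (not the researchers' actual model), a person's life can be encoded as a sequence of event tokens, and a model then learns to predict the next event. The toy sketch below uses simple bigram counts where the real system uses a transformer; all event names are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical event vocabulary; the real work uses thousands of coded
# events (diagnoses, occupations, income brackets, residence changes, ...).
sequences = [
    ["BORN", "SCHOOL", "JOB_RETAIL", "MOVE_CITY", "JOB_OFFICE"],
    ["BORN", "SCHOOL", "JOB_OFFICE", "MOVE_CITY", "JOB_OFFICE"],
    ["BORN", "SCHOOL", "JOB_RETAIL", "JOB_OFFICE", "MOVE_CITY"],
]

# Toy next-event model: bigram counts standing in for the transformer's
# learned transition structure over life-event tokens.
bigrams = defaultdict(Counter)
for seq in sequences:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def predict_next(event):
    """Most frequent follower of `event` in the training sequences."""
    return bigrams[event].most_common(1)[0][0]

print(predict_next("SCHOOL"))  # → JOB_RETAIL
```

A transformer replaces these raw counts with learned representations that condition on the whole life history, which is what lets it forecast outcomes like personality traits or mortality.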
Artificial intelligence tools, with promising applications spanning autonomous vehicles and medical image interpretation, are more vulnerable to targeted attacks than previously thought, a recent study finds. The vulnerability arises from "adversarial attacks," in which individuals manipulate data inputs to confuse AI systems, potentially leading to flawed decisions: placing a specific sticker on a stop sign, for example, can render the sign effectively invisible to an AI system. Co-author Tianfu Wu, an associate professor at North Carolina State University, notes that while AI systems generally recognize stop signs despite alterations, attackers can exploit these vulnerabilities to cause accidents. The study, which examined how prevalent adversarial vulnerabilities are in deep neural networks, found them to be far more common than previously believed. To assess vulnerability, the researchers developed QuadAttacK, a software tool capable of testing any deep neural network for such weaknesses.
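A minimal sketch of the adversarial-attack idea, using a toy linear classifier and a gradient-sign perturbation (FGSM-style) rather than the paper's QuadAttacK method; the weights and inputs below are invented for illustration.

```python
import numpy as np

# Toy linear classifier standing in for a deep network.
w = np.array([1.0, -2.0, 0.5])   # assumed fixed, known weights
b = 0.1

def predict(x):
    """Binary decision from the linear score."""
    return 1 if x @ w + b > 0 else 0

x = np.array([0.3, 0.1, 0.2])    # clean input, classified as 1

# For a linear model, the gradient of the score w.r.t. x is just w, so
# stepping each coordinate against sign(w) lowers the score fastest
# under a small per-coordinate budget eps.
eps = 0.2
x_adv = x - eps * np.sign(w)     # [0.1, 0.3, 0.0]: a tiny nudge

print(predict(x), predict(x_adv))  # → 1 0 (the small change flips the label)
```

Deep networks are nonlinear, but the same principle applies: tools like QuadAttacK search for small input changes that push a network's output across a decision boundary.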
A new and powerful AI feature allows you to obtain instant summaries for any webpage with just a click. To activate it:
1. Head to labs.google.com, click 'Get Started' under 'Search Powered by Generative AI,' and toggle on 'SGE while browsing.'
2. Restart Chrome and locate the 'G' icon in your plugin bar; if it isn't immediately visible, open the side panel icon in the top right, select 'search,' and pin it to the top bar.
3. Visit the desired website, click the 'G' icon, and select 'generate' to receive quick, concise key points powered by Google Bard.
📈 Osum performs deep market research in seconds
❤️ Digi is an AI romantic companion
📸 Fal is a photo booth powered by AI
🔥 Roast My Web enhances your website with intelligent AI critiques
🧠 InterviewJarvis is an AI-powered tech job interview preparation tool