August 10, 2023 • 6 mins
Hey Niuralogists!
Another week has flown by in the artificial intelligence space, and we’re here with a collection of the most groundbreaking updates. As AI continues to improve, we’re committed to ensuring that you’re not only informed but also ready to engage with what’s coming next.
In an attempt to defend its 80% market share in the AI hardware space from competitors such as Google and Amazon, Nvidia has introduced its latest chip: the GH200. This new chip is specifically designed to run artificial intelligence models more efficiently. It is equipped with the same GPU as Nvidia's premier AI chip, the H100, but it pairs this GPU with an impressive 141 gigabytes of advanced HBM3e memory and a 72-core ARM central processor. Nvidia's CEO, Jensen Huang, highlighted that this chip is crafted for the expansion of global data centers and is set to be available in the second quarter of 2024. The GH200 is particularly optimized for inference, the process in which AI models make predictions or generate content. With its enhanced memory capacity, the GH200 can accommodate larger AI models on a single system, making the inference process more efficient and cost-effective. This development is expected to significantly reduce the costs associated with running large language models.
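To make the memory math a little more concrete, here's a rough back-of-the-envelope sketch (our own illustration, not Nvidia's sizing methodology) of how parameter count and numeric precision determine whether a model's weights fit on a single 141 GB device for inference. Real deployments also need headroom for activations, the KV cache, and framework overhead, so treat these numbers as a lower bound.

```python
# Illustrative estimate of how much memory a model's weights need for inference.
# This is a simplified sketch, not an official Nvidia sizing tool.

def model_memory_gb(num_params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate memory to hold model weights, in gigabytes."""
    return num_params_billion * 1e9 * bytes_per_param / 1e9

GH200_MEMORY_GB = 141  # advertised memory of a single GH200

# (parameters in billions, precision label, bytes per parameter)
scenarios = [(70, "fp16", 2), (70, "int8", 1), (175, "fp16", 2)]

for params_b, precision, bytes_pp in scenarios:
    needed = model_memory_gb(params_b, bytes_pp)
    verdict = "fits" if needed < GH200_MEMORY_GB else "does not fit"
    print(f"{params_b}B params at {precision}: ~{needed:.0f} GB of weights -> {verdict} in {GH200_MEMORY_GB} GB")
```

Running this shows why the extra memory matters: a 70-billion-parameter model in half precision needs roughly 140 GB just for its weights, which only barely squeezes onto a single 141 GB device, while smaller precisions or smaller models leave comfortable room to spare.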
Meta is getting ready to introduce a new line of AI-powered chatbots with distinct personas, expected to launch as early as next month. These chatbots are designed to mimic human-like interactions, offering users a unique and engaging experience. For example, users might interact with a chatbot that advises on travel in the style of a laid-back surfer, or even one that speaks like Abraham Lincoln. This initiative is part of Meta's broader strategy to boost user engagement and provide more personalized content and recommendations on its platforms. However, the introduction of these advanced chatbots could lead to more comprehensive data collection, which raises potential concerns about user privacy. It's worth noting that Meta isn't alone in its pursuit of AI-driven interactions. Snapchat, for example, has already rolled out its own AI chatbot, "My AI", powered by OpenAI's GPT technology. As the tech industry continues to evolve, Mark Zuckerberg has hinted at a future where AI plays a central role in Meta's product offerings, suggesting that this chatbot release is just the tip of the iceberg.
OpenAI has recently filed a trademark application for GPT-5 with the United States Patent and Trademark Office (USPTO). The trademark application covers a broad range of software relating to its chatbot, including the artificial production of human speech and improvements to natural language processing. It also includes other software capabilities for translating text or speech from one language to another, sharing datasets for machine learning models, predictive analytics, and building language models. Other features include converting audio into text and voice recognition.
Additionally, OpenAI doesn’t seem to be stopping at software development, as it also plans to venture into the Software as a Service (SaaS) domain by offering advanced functionalities to businesses and developers. This isn’t OpenAI’s first time trademarking its innovations: it filed a similar application for GPT-4 back in March 2023. Despite the filing, Sam Altman, OpenAI’s CEO, has stated that GPT-5 is still in the development stage and has a long way to go. Altman emphasized the meticulous work involved and the importance of safety audits.
Uber is in the process of developing an AI chatbot similar to ChatGPT to be integrated into its app, as revealed by CEO Dara Khosrowshahi. This move comes as Uber's competitors, DoorDash and Instacart, are also taking steps in the same direction. DoorDash is currently working on a system named DashAI, aimed at expediting the ordering process. On the other hand, Instacart has already rolled out a bot powered by OpenAI's technology, designed to address customer queries efficiently. Khosrowshahi emphasized that Uber's foray into machine learning isn't recent and that the company has been leveraging the technology for several years. The integration of such AI chatbots by these companies signifies a trend in the tech industry, aiming to enhance user experience and streamline operations through advanced AI solutions.
A series of significant updates are coming to ChatGPT with the goal of enhancing its user experience and functionality. With Bing Chat shipping new features at a rapid pace, ChatGPT is set to catch up with upgrades of its own. Logan Kilpatrick, a developer relations expert at OpenAI, recently unveiled a list of new ChatGPT features that are expected to roll out this week. Some of the notable updates include example prompts to assist users, suggested replies that automatically generate follow-up questions, the default use of GPT-4 for improved output, and the ability for beta users to upload multiple files into the Code Interpreter. Additionally, users will benefit from staying logged in, eliminating the need to repeatedly enter credentials, and new keyboard shortcuts for enhanced navigation. Many of these features, such as suggested replies and GPT-4 by default, have been popular on Bing Chat, and their introduction to ChatGPT has been eagerly awaited. These updates are designed to make interactions with ChatGPT more intuitive, efficient, and user-friendly.
The potential of AI in treating paralysis was brought to light when researchers from Northwell Health’s Feinstein Institutes achieved a medical marvel. The researchers used AI-powered brain implants to restore movement and sensation for Keith Thomas, a man paralyzed from the chest down. This was made possible through an innovative “double neural bypass” procedure in which microchips were implanted into Thomas’ brain and, with the help of AI algorithms, reconnected his brain to his body and spinal cord. When Thomas thought of moving his arm, the AI-driven system translated his thought into action, bypassing the spinal injury and allowing him to move his arm. Thomas can also feel the sensation of touch in his fingers, something he hadn’t been able to do for years. Moreover, within just four months, Thomas doubled his arm strength, showcasing the transformative potential of this technology. While these results are promising, it’s important to understand that this doesn’t represent a cure but rather a workaround that leverages AI to alleviate the effects of spinal injury. The broader implications of this technology are vast, with potential applications that could offer life-changing mobility and independence to many.
IBM and NASA aim to harness AI foundation models to analyze the vast geospatial data collected by NASA, primarily to gain deeper insights into Earth’s climate. By 2024, NASA anticipates a whopping 250,000TB of data from new satellite missions, which presents a significant challenge for scientists to analyze manually. This is where AI steps in. The IBM-NASA partnership has led to the release of a large geospatial foundation model on the open-source AI platform Hugging Face, aiming to democratize access to this technology. The primary goal? To provide researchers with a more streamlined and efficient method to interpret and derive meaningful insights from these colossal datasets. IBM's initiative is just a glimpse of how AI can be a game-changer in climate science. By analyzing satellite data, AI models can track natural disasters and detect changes in crop yields and wildlife habitats, thereby assisting researchers in understanding Earth’s intricate environmental systems. With AI's ability to process and analyze vast amounts of data at unprecedented speeds, it offers a promising tool in the global effort to understand and mitigate the impacts of climate change.
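For readers who want to poke at the release themselves, here is a minimal sketch of fetching the model files from the Hugging Face Hub using the standard `huggingface_hub` client. The repository id below is our assumption based on the IBM-NASA "Prithvi" release announcement and may have changed, so check the Hub listing before running it.

```python
# Minimal sketch: download the IBM-NASA geospatial foundation model files
# from the Hugging Face Hub. The repo id is an assumption and may differ;
# browse huggingface.co for the current IBM-NASA geospatial organization.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ibm-nasa-geospatial/Prithvi-100M",  # assumed repo id
)
print(f"Model weights and configs downloaded to: {local_dir}")
```

From there, the downloaded weights and configuration files can be loaded into whatever fine-tuning or inference pipeline a research team already uses, which is exactly the kind of low-friction access the open-source release is meant to enable.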
The transformation of the toy manufacturing industry by artificial intelligence is becoming more prevalent, especially with the rise of "smart toys" and AI-integrated educational tools. These advancements suggest a significant shift in how children will play and learn. Smart toys, powered by AI, can now recognize and adapt to a child's voice, facial expressions, and gestures, offering a more immersive and personalized play experience. Some even evolve based on interactions, ensuring a unique playtime every session. Beyond just play, AI-driven educational toys are designed to teach a spectrum of subjects from basic language skills to intricate STEM concepts, adjusting to a child's learning pace and style. This means children aren't just playing; they're learning in a tailored, interactive manner. Furthermore, as AI toys become more advanced, they promise more dynamic play experiences that can stimulate creativity, problem-solving, and social skills. While the full impact of AI on children's play and learning is still unfolding, it's evident that the future of play is set to be more interactive and educational.