•
August 17, 2023
•
6 mins
Hey Niuralogists!
Hope you’ve had a great week! Once again it’s time to catch up on the latest news in the world of AI. With so much happening, we’ve searched through the latest news to bring you the most relevant updates. In this edition we’ll go over Anthropic’s new AI model, Google’s new project, OpenAI’s financial struggles, IBM’s new chip, and UK’s investment in AI healthcare.
Anthropic is known primarily for its AI model, Claude, which is able to compete with the likes of ChatGPT 3.5 and ChatGPT 4. Recently, Anthropic has launched an updated version of its AI model, Claude Instant 1.2 which builds upon its latest Claude 2.0 foundation. This enhanced model contains remarkable improvements in coding, math, reasoning, and even safety while producing longer and more coherent responses. When conducting its benchmark tests against established frameworks, Claude Instant 1.2 outperformed its predecessors. The model also displayed reduced mistakes and increased security features, making it a notable contender in the AI chatbot space.
Google has unveiled Project IDX, an AI-enhanced browser-based development platform for creating full-stack web and multiplatform applications. The platform currently supports JavaScript and Dart with more language support incoming. Notably, Google chose to utilize the open-source Visual Studio Code as IDX's foundation rather than building a new integrated development environment (IDE). This allowed Google to build some key features such as its integration with Codey (Google's foundational model which facilitates smart code completion), a chatbot for coding inquiries, and contextual code actions such as “add comments”. Positioned as a cloud-based IDE, Project IDX naturally integrates with Google Firebase Hosting and Google Cloud Functions and even supports importing code from GitHub repositories. Though the platform shows promise, early tests reveal the IDX chatbot lacks full integration with source code. With Project IDX still in its infancy, Google aims to enhance its capabilities over time.
OpenAI’s potential once seemed limitless but as of recent, it’s becoming increasingly more clouded as the company grapples with a variety of challenges. Following their trademark attempt for “GPT”, there’s been a declining number of users for ChatGPT potentially due to the release of their API which allowed users to build their own model instead of using ChatGPT. Additionally, the competition in the AI space is becoming more intense with rising open-source models such as Meta’s Llama 2 as appealing alternatives. Despite switching to a profit-focused model and receiving substantial backing from Microsoft, Open AI’s financial losses have been mounting up with it totaling $540 million in May, which places further doubt about their ambitious revenue targets. Their path to profitability is further stricken due to GPU shortages which affects model training, suggesting a potential quality decline. With all this in mind, if OpenAI doesn’t receive additional funding soon, they may be heading towards bankruptcy by the end of 2024.
IBM Research has introduced an innovative analog AI chip, designed to boost efficiency and precision in deep neural network (DNN) computations. Traditional digital architectures often face performance and energy efficiency hurdles due to continuous data transfers between memory and processors. IBM's solution leans into analog AI, mirroring the operations of biological neural networks using nanoscale resistive memory devices called Phase-change memory (PCM). These devices reduce the need for data transfers by executing computations directly in memory. IBM's new chip comprises 64 analog in-memory compute cores, each containing synaptic unit cells and mechanisms for transitioning between analog and digital states. Tests showed an impressive 92.81% accuracy on the CIFAR-10 image dataset, a record for analog AI chips. The chip also outperformed its predecessors in computing efficiency, signaling a pivotal advancement in energy-efficient AI computation.
The UK government is investing £13 million in advanced AI research for healthcare, funding 22 projects across various universities and NHS trusts in order to prompt innovation and improve patient care. Key initiatives include a semi-autonomous surgical robotics platform at the University College London for better brain tumor removal, a system at Heriot-Watt University for real-time feedback during laparoscopy training, and a project at the University of Surrey focusing on enhancing mammogram analysis. AI's capabilities in healthcare could be largely beneficial with a range from genetic code analysis and robot-assisted surgeries to predictive analytics for disease spread. As the UK grapples with record-high NHS waiting lists, AI's potential to expedite and improve diagnoses and treatments is spotlighted. In addition, the UK, a leading force in AI in Europe, is gearing up to host an international summit on AI safety, emphasizing its dedication to the ethical evolution of AI technology.
📬 Receive our amazing posts straight to your inbox. Get the latest news, company insights, and Niural updates.
The idea of using AI in wildlife conservation was put to the test when researchers in August 2023 began using AI-controlled cameras and microphones to monitor British wildlife. These devices were designed to identify and map the locations of various species by capturing their sounds and images while not utilizing the use of humans at all. These tests were done in London and were able to successfully identify dozens of bird species just from their songs and other animals such as foxes and bats were pinpointed using AI analysis. Conservation specialist, Anthony Dancer, stated that the scale of this operation, which involved tens of thousands of data files and extensive hours of audio, would not have been possible without AI technology. As the focus of this innovation wasn't just to survey wildlife on Network Rail land but also to understand species movement in response to climate change, it's evident that AI's role in wildlife conservation can be significant.
LabGenius, a biopharmaceutical company, harnessed the power of AI to revolutionize the engineering of antibodies. Traditionally, the antibody design process was tedious with protein engineers having to manually sift through millions of amino acid combinations to discover effective treatments. However, LabGenius used a method using machine learning to improve this treatment, targeting diseases more specifically. Instead of the usual months of labor required, their AI-driven method can discover unique antibody solutions in just 6 weeks. Additionally, LabGenius’ method starts with a selection of 700 initial antibodies from a vast pool of 100,000 potential candidates. This means that with each testing round of 700 antibodies, the AI’s designs improve by utilizing the information obtained from the previous testing round. LabGenius’ work serves to showcase the potential of AI in drug discovery and hopefully other scientific discoveries.
The weaknesses of AI chatbots were further exposed during the annual Def Con hacker conference in Las Vegas. Thousands of participants (with little to no AI experience) set out to manipulate chatbot responses through words which revealed a surprising number of flaws in leading chatbot systems from tech giants like Google, Facebook's Meta, and OpenAI. One notable instance was when one participant tricked an AI into revealing a confidential credit card number. Similarly, another participant prompted a chatbot to provide detailed spying instructions. This experimentation revealed that while AI chatbots might be designed to mimic human interactions, they can be misled into generating false, harmful, or unauthorized information, often termed "hallucinations". Such loopholes raise concerns about the unintended consequences of AI, especially as it integrates more deeply into everyday life.