July 27, 2023 • 9 mins
Hello Niuralogists!
With the tech industry constantly releasing new AI developments, we return with our weekly edition to keep you up to date on the latest breakthroughs. Our mission is to help you understand the impact of new AI developments on businesses, legislation, and individuals in our ever-evolving workspace. In this week's newsletter, we'll cover OpenAI's new personalization feature, a new AI salesperson, Apple's very own chatbot, OpenAI's Head of Trust and Safety stepping down, and Meta's release of Llama 2.
In a significant development, Dave Willner, the Head of Trust and Safety at OpenAI, has announced his decision to step down and transition into an advisory role in order to spend more time with his family. Willner's departure comes after a year and a half at OpenAI; he says the role became particularly intense after the launch of ChatGPT. OpenAI is now searching for a replacement, with CTO Mira Murati managing the team in the interim.
Willner's exit comes at a critical time for artificial intelligence, as the industry wrestles with questions about how to regulate AI activity and the companies in the field. His extensive trust-and-safety career at companies such as Facebook and Airbnb is another reason his departure matters. Nonetheless, OpenAI expressed gratitude for his contributions and called his work foundational to the safe and responsible use of its technology. Willner's departure marks the beginning of a new chapter for OpenAI as it continues to navigate the challenges of AI regulation and development.
Meta, in collaboration with Microsoft, recently announced Llama 2, the next generation of its open-source large language model (LLM). The release marks a significant step in Meta's commitment to open AI: Llama 2 is freely available for both research and commercial use. It is also seen as a potential competitor to OpenAI's GPT-4; while a performance gap remains between the two models, Llama 2's open licensing makes it an attractive choice for many use cases.
The release of Llama 2 includes models in a range of sizes (7, 13, and 70 billion parameters), along with fine-tuned chat versions that can power a ChatGPT-style assistant. Despite concerns that the model could produce false or offensive output, Meta has applied a range of machine-learning techniques to improve its safety and helpfulness. Llama 2's open-source nature also allows external researchers and developers to probe it for security flaws. The release is seen as a significant moment for Meta and the broader AI community.
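For readers who want to try the chat-tuned variants, they expect a specific prompt template: system instructions wrapped in `<<SYS>>` tags inside an `[INST]` block. A minimal sketch of building that prompt in Python follows; the message strings are illustrative, and the template shown is the one documented for the Llama-2-chat models.

```python
def build_llama2_prompt(system_msg: str, user_msg: str) -> str:
    """Wrap a system and user message in the Llama-2-chat prompt template."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_msg}\n"
        "<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

prompt = build_llama2_prompt(
    "You are a helpful, honest assistant.",
    "Summarize the Llama 2 release in one sentence.",
)
print(prompt)
```

The resulting string is what you would feed to a Llama-2-chat model's tokenizer; follow-up turns in a conversation append further `[INST] ... [/INST]` blocks after the model's previous reply.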
A new AI system known as AIR is designed to handle full 30-40 minute phone conversations and is remarkably good at imitating a human salesperson. It can perform actions across more than 5,000 sales applications, eliminating the need for training, management, or even motivation, and it operates around the clock as a reliable salesperson that can also handle a variety of tasks. Its most impressive feature is its ability to stay on a call for up to 40 minutes while delivering a convincingly human experience that leaves customers astonished.
This is made possible by autonomous capabilities that let it navigate diverse applications, ensuring seamless interactions no matter the platform. Its sophisticated algorithms and deep-learning capabilities allow AIR to grasp the intricacies of human conversation, adapt to various customer scenarios, and provide accurate, compelling responses. The system learns continuously, optimizing its performance and enhancing customer engagement over time. Over 50,000 businesses have already expressed interest in beta testing AIR, and the setup process is reportedly fast, similar to setting up a Facebook ad campaign. In a matter of minutes, businesses can have their AI agent live and actively engaging in calls, marking a significant step forward for AI in the sales industry.
Apple has finally entered the AI chatbot race with a tool internally nicknamed "AppleGPT," a nod to OpenAI's ChatGPT. The chatbot is built on foundation models created with an internal framework called Ajax, which runs on Google Cloud.
Currently, the tool is available only for internal use by Apple employees, and its public release date remains uncertain due to security concerns. Apple's CEO, Tim Cook, has said that the security issues accompanying generative AI need to be resolved before the technology can be widely adopted. Even so, Apple already uses machine learning across its software on all its devices, with Siri being a prime example. The company is still strategizing how to effectively implement generative AI before launching its first AI chatbot.
OpenAI has released a new feature for ChatGPT that lets users set custom instructions. The feature enhances personalization by letting users specify context that carries over into every conversation. Users with access see two boxes: one for what they want ChatGPT to know about them before responding, and one for how they want ChatGPT to adjust its responses. For example, a developer can indicate a preferred coding language, or a user can specify their family size for more tailored meal-planning suggestions. Custom instructions also work with ChatGPT's plugins.
For instance, a user could enter their location, and a restaurant plugin would then suggest restaurants nearby. The feature is currently available only to Plus plan users, with the exception of EU and UK users due to regulatory requirements. Importantly, the information users provide may be used to train OpenAI's models to follow different instructions, improving their ability to give accurate and relevant responses; users concerned about privacy can opt out of this in settings.
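Custom instructions is a ChatGPT interface feature, but a similar effect can be approximated in OpenAI's API by folding the same two boxes into a system message sent with every request. A minimal sketch, assuming illustrative instruction strings (the payload is built here but not actually sent to the API):

```python
# The two "custom instruction" boxes, expressed as plain strings.
about_me = "I am a Python developer; my family has four members."
response_style = "Prefer Python for code samples and keep answers concise."

def build_messages(user_msg: str) -> list[dict]:
    """Fold both instruction boxes into a single system message
    that precedes the user's message, Chat Completions style."""
    system = (
        f"Things to know about the user: {about_me}\n"
        f"How to respond: {response_style}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_msg},
    ]

messages = build_messages("Suggest a weeknight dinner plan.")
```

Because the system message is rebuilt for every request, the "instructions" persist across conversations the same way the UI feature does.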
The debate over whether AI can outperform journalists has been stirred by Google's unveiling of Genesis, its news-writing AI tool. Industry experts and journalists, however, say AI has not yet surpassed human journalism in writing quality. Despite Genesis's impressive ability to gather information and generate news articles, it is seen more as a supplemental tool than a replacement for human-written work. Writers also remain skeptical because of the risk of false information and AI's history of generating content that contains errors. While AI has made great strides in automating stories, it still has a long way to go before it matches human journalists, especially since human journalism involves ethical judgment and a human touch. For now, AI is a tool to assist journalists, not replace them.
AI has long been able to write code, but developers have been skeptical of the quality. Recently, Tabnine released a tool designed specifically to help developers write code: Tabnine Chat. It can generate code, answer questions inside the integrated development environment (IDE), identify coding issues, suggest improvements, and even enforce coding practices. It can also improve compliance by allowing organizations to restrict the model to permissively licensed code. Eran Yahav, CTO of Tabnine, emphasizes the importance of trust in these tools and says that AI tools like Tabnine Chat are part of the company's commitment to delivering ethical, secure AI throughout the software development lifecycle. While the potential of AI-assisted coding is evident, challenges remain: continuously adapting to evolving coding practices, balancing automation with human creativity, and ensuring the ethical use of AI in coding.
Lately, AI has been reshaping dozens of industries, but what about space? As it turns out, AI and machine learning have shown promising results in automating spacecraft engine operations, predicting cosmic events, and even mapping the universe. For instance, SpaceX's Falcon 9 relies on autonomous flight software, and data from NASA's Kepler telescope has been mined with AI to identify potential planets. The Mars rovers are equipped with machine-learning algorithms that let them autonomously navigate Martian terrain while avoiding hazards. Computational imaging algorithms were also instrumental in producing the first images of a black hole, and black hole research earned Roger Penrose, Reinhard Genzel, and Andrea Ghez the 2020 Nobel Prize in Physics. Scientists are now exploring whether AI can help reveal what lies within a black hole's interior, a task tied to one of the greatest open problems in physics: unifying Einstein's general theory of relativity with the Standard Model of particle physics. AI is also expected to play a crucial role in measuring the universe and better understanding its size and shape. Fully automated AI for space exploration is not quite there yet, given the need for advanced real-time decision-making, but the potential benefits are immense, and ongoing research in this field will likely yield significant advances.