July 20, 2023

Latest Breakthroughs in the World of AI: Weekly Update!

Hi Niuralogists!

As the world of AI continues to change, we return with our weekly edition to keep you updated on the latest breakthroughs. Our mission remains the same: to navigate the implications of these developments for enterprises, businesses, legislation, and individuals in our ever-evolving workspace. In this week’s newsletter, we’ll be talking about Elon Musk’s new AI company, Anthropic’s Claude 2 chatbot, authors suing OpenAI and Meta, Stability AI’s unveiling of Stable Doodle, and OpenAI’s new AI threat prevention team.

Sarah Silverman Sues OpenAI and Meta

The comedian Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, has filed a lawsuit against OpenAI and Meta. The suit alleges that the companies’ AI models (OpenAI’s ChatGPT and Meta’s LLaMA) were trained on datasets containing the authors’ copyrighted works. The evidence presented against OpenAI includes the fact that ChatGPT was able to summarize their books in their entirety, while the claim against Meta is that the books were sourced illegally and included in the datasets used to train the LLaMA models.

The authors did not consent to the use of their works in this manner and are seeking statutory damages, restitution of profits, and more, accusing OpenAI and Meta of copyright violations, negligence, and unfair competition. They are not alone in this viewpoint: the increasing use of AI continues to raise concerns among authors around the world, with the Writers’ Guild of Great Britain calling for an independent AI regulator and stricter rules on AI models being trained on writers’ work. The outcome of this lawsuit will test the boundaries of copyright law as it applies to artificial intelligence.

Elon Musk Launches His Own AI Company

Elon Musk has officially launched a new artificial intelligence company called xAI. The announcement comes after months of speculation fueled by telling moves such as Musk’s purchase of 10,000 GPUs for Twitter, hardware typically associated with large-scale machine learning projects. In April, Musk also shared his intention to create a new AI model called TruthGPT, describing it as a “maximum truth-seeking AI that tries to understand the nature of the universe”, a description that aligns closely with the mission of xAI as stated in its press release.

The team behind xAI includes talent from across the AI industry, with former employees of DeepMind, OpenAI, Google, Tesla, and Microsoft. The launch is also somewhat surprising given that Musk signed a petition back in March calling for a halt to further AI development. The move signals that Musk intends to remain a prominent figure in AI, a field he has been investing in since backing OpenAI in 2015.

Stability AI Unveils Stable Doodle

Stability AI, a leading open-source AI company, recently launched Stable Doodle, a sketch-to-image tool that transforms simple drawings into dynamic images. The tool is available on the Clipdrop website and app. Stable Doodle combines the advanced image-generating technology of Stability AI’s Stable Diffusion XL with the T2I-Adapter, which conditions the generated image on the user’s sketch.

Using it is as simple as drawing a basic sketch, choosing an art style, and clicking “generate” to produce high-quality original images in seconds. However, users are advised to be cautious when relying solely on Stable Doodle, as it has limitations: the final output depends on the initial drawing and the description provided by the user, and the tool’s accuracy varies with the complexity of the scene.
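
For readers curious what this pairing looks like in code, the sketch below approximates Stable Doodle’s recipe with the open-source diffusers library. It is a minimal illustration, not Stability’s production setup: the `StableDiffusionAdapterPipeline` and `T2IAdapter` classes come from diffusers, but the sketch-adapter checkpoint, base model, and file names are assumptions standing in for the Stable Diffusion XL configuration the announcement describes.

```python
# Minimal sketch-to-image pipeline in the spirit of Stable Doodle:
# Stable Diffusion plus a T2I sketch adapter. Model IDs and file names
# are illustrative, not Stability AI's production configuration.
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load a sketch-conditioned T2I adapter and attach it to a Stable Diffusion base.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_sketch_sd15v2", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# The user's doodle: the sketch adapter expects a single-channel line drawing.
sketch = load_image("doodle.png").convert("L")

# The chosen "art style" is carried by the text prompt.
image = pipe(
    prompt="a cozy cottage in a forest, watercolor style",
    image=sketch,
    num_inference_steps=30,
).images[0]
image.save("generated.png")
```

The division of labor mirrors Stable Doodle’s design: the adapter injects the sketch’s structure into the diffusion process, while the prompt supplies the subject and style, which is why the output quality depends so heavily on the initial drawing and description.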

Anthropic Launches Claude 2

Anthropic has launched a new chatbot called Claude 2, designed to rival OpenAI’s ChatGPT. The chatbot can summarize blocks of text as long as a novel, and it operates under a set of safety principles derived from a number of sources, including the Universal Declaration of Human Rights. One of those principles states: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.” Dr. Andrew Rogoyski of the Institute for People-Centred AI has compared Anthropic’s safety method to Isaac Asimov’s fictional Three Laws of Robotics, featured in “I, Robot”, which are designed to keep robots from harming humans.
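
Claude 2 is available both through the claude.ai chat interface and through Anthropic’s API. As a minimal sketch of the novel-length summarization use case, the snippet below follows the Python SDK conventions Anthropic documented at Claude 2’s launch; the file name is a placeholder, and an `ANTHROPIC_API_KEY` environment variable is assumed.

```python
# Minimal sketch: summarizing a long document with Claude 2 via Anthropic's
# Python SDK, following the conventions documented at the Claude 2 launch.
# Assumes ANTHROPIC_API_KEY is set; "novel.txt" is a placeholder file.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("novel.txt") as f:
    novel = f.read()  # Claude 2's large context window can hold a short novel

completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=1024,
    prompt=f"{HUMAN_PROMPT} Summarize the following book:\n\n{novel}{AI_PROMPT}",
)
print(completion.completion)
```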

Despite its impressive capabilities and safety measures, Claude 2 has shown some inaccuracies, including factual errors in its outputs. Safety nonetheless remains one of Claude’s defining features, and Anthropic’s CEO, Dario Amodei, has been closely involved in discussions about safety in AI models and is adamant about AI risk mitigation.

OpenAI’s New Anti-AI Team

In an attempt to prevent potential threats from superintelligent AI, OpenAI has launched a new team named “Superalignment”. The initiative was created in response to concerns raised by several AI experts, including Geoffrey Hinton and OpenAI’s CEO, Sam Altman, about the dangers of superintelligent AI surpassing human capabilities.

The Superalignment team is composed of top machine learning researchers and engineers and is tasked with developing a roughly human-level automated alignment researcher to conduct safety checks on superintelligent AI systems. While success is not guaranteed, OpenAI remains optimistic that its efforts can keep superintelligence in check. The transformative potential of AI tools like OpenAI’s ChatGPT and Google’s Bard, which have already brought significant changes to society, underscores the importance of Superalignment’s goal.

Newsletter

📬 Receive our amazing posts straight to your inbox. Get the latest news, company insights, and Niural updates.


Q&Ai

🦠 Can AI be corrupted?

Mithril Security recently conducted demonstrations showing that AI models, more specifically Large Language Models (LLMs) such as ChatGPT, can be corrupted. In its demonstration, Mithril modified an open-source model, GPT-J-6B, to spread false information while still performing its other tasks normally. The technique is known as poisoning, and it can put malicious models into applications wherever companies or users rely on pre-trained weights. It works like this: an attacker edits an LLM, by altering its training data, code, or weights, so that it spreads targeted false information, and then impersonates a reputable model provider to distribute the malicious edits. Unsuspecting LLM builders pull the poisoned model into their infrastructure, and their users are in turn exposed to a poisoned AI. The implications of AI poisoning include the distribution of fake news and a growing need for stronger AI supply-chain security.
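
A surgically edited model is hard to detect from its outputs alone, but the supply-chain half of the attack, impersonating a reputable provider, can be blunted by verifying exactly which artifact you load. Below is a minimal defensive sketch in plain Python, assuming the trusted provider publishes a checksum out of band; the file path and expected hash are placeholders.

```python
# Minimal defensive sketch: verify a downloaded model artifact against a
# checksum published by the trusted provider before loading it. This blunts
# the impersonation half of a poisoning attack; it cannot detect a provider
# that ships poisoned weights itself. Path and hash below are placeholders.
import hashlib

EXPECTED_SHA256 = "replace-with-the-providers-published-checksum"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so multi-gigabyte weights don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

weights_path = "gpt-j-6b/pytorch_model.bin"
actual = sha256_of(weights_path)
if actual != EXPECTED_SHA256:
    raise RuntimeError(
        f"Checksum mismatch for {weights_path}: got {actual}. "
        "These weights may not come from the provider you think they do."
    )
# Only hand the verified file to your model loader after this check passes.
```

Checksum pinning is standard practice for software supply chains and transfers directly to model weights, though it offers no protection against poisoning introduced upstream of the provider you trust.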

🏛️ How is AI going to be regulated?

The regulation of AI is a major point of discussion, with various entities across the world, such as the US Congress, the European Commission, and China, trying to take the initiative. OpenAI’s CEO, Sam Altman, has also begun calling for regulators to set limits on powerful AI systems, as he understands the serious harm they could cause if something goes wrong. In the US, multiple parties are vying to lead AI regulation: Senate Majority Leader Chuck Schumer is calling for preemptive legislation to establish “guardrails” on AI products and services, the Biden Administration is implementing a blueprint for an AI Bill of Rights, and both the National Telecommunications and Information Administration and the Federal Trade Commission have expressed interest in AI regulation. Major regulation will most likely come first from outside the US, however. The European Parliament recently approved the AI Act, a 100-page statute that would ban applications deemed to carry “unacceptable” levels of risk, and regulators in China are moving quickly to incentivize AI products and services built in China.

📚 How is AI changing education?

Experts have identified several ways AI is changing the educational landscape, a focal point of discussion at the 2023 AI+Education Summit organized by Stanford University. AI is enhancing personalized support for teachers, providing real-time feedback, simulating students for teaching practice, and helping teachers stay current with the latest advancements in their field. It is also shifting the focus of learning from proficiency to understanding, which encourages students to engage more deeply with the material, and enabling learning without fear of judgment by providing constructive feedback that encourages learners to take risks. Finally, AI is improving learning and assessment quality by supporting individualized conversations with each student and quickly gauging a learner’s skills. The integration of AI into education has its challenges, however: concerns are being raised about AI’s inability to reflect true cultural diversity in its answers, its tendency to prioritize the speed of its answers over their soundness, and the risk of outright errors. Despite these challenges, experts agree that AI will be transformative for education if navigated carefully.

AI Tools

  • Nekton writes automation code to help you complete your day-to-day tasks

  • Level up your LinkedIn profile and community with Taplio

  • Presentation.ai helps you create presentation decks from simple writing

  • Mixo.io helps entrepreneurs validate their startup ideas in seconds

  • Extract information from websites using Browse.ai

Follow us on Twitter and LinkedIn for more content on artificial intelligence, global payments, and compliance. Learn more about how Niural uses AI for global payments and team management to take care of your company’s most valuable resource: your people.

See you next week.

Request a demo