

AI'm Informed

13.12.2023

Spread the news - please forward this email to a friend or a colleague.

• Tree of Attacks with Pruning (TAP) is a low-cost method for using small unaligned LLMs to overcome the guardrails of GPT-4 or Llama-2, and make them produce toxic, harmful, and unsafe responses (e.g. “how to build a bomb”) in minutes. - Robust Intelligence
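
At a high level, TAP runs a small attacker model in a branch-and-prune loop: it keeps refining candidate jailbreak prompts, an evaluator model discards candidates that drift off topic or score poorly, and the best branches are expanded until the target model complies. The Python below is only a minimal sketch of that loop, not Robust Intelligence's code; the callables (attacker_refine, is_on_topic, score_response, query_target) and the default depth/branching/width values are hypothetical placeholders you would wire up to real models.

```python
from typing import Callable, Optional, Tuple

def tree_of_attacks(
    goal: str,
    attacker_refine: Callable,   # hypothetical: small attacker LLM proposes refined prompts
    is_on_topic: Callable,       # hypothetical: evaluator checks a prompt still pursues the goal
    score_response: Callable,    # hypothetical: evaluator rates how jailbroken the response is (1-10)
    query_target: Callable,      # hypothetical: sends a prompt to the target model (GPT-4, Llama-2, ...)
    max_depth: int = 10,
    branching: int = 4,
    width: int = 10,
    success_threshold: int = 10,
) -> Optional[Tuple[str, str]]:
    # Each leaf holds a candidate jailbreak prompt plus the feedback gathered so far.
    frontier = [{"prompt": goal, "history": [], "score": 0}]

    for _ in range(max_depth):
        candidates = []
        for node in frontier:
            # Branch: the attacker model proposes several refinements of this prompt.
            for prompt in attacker_refine(node, n=branching):
                # Prune (phase 1): drop refinements that wandered away from the goal.
                if not is_on_topic(prompt, goal):
                    continue
                response = query_target(prompt)
                score = score_response(goal, prompt, response)
                if score >= success_threshold:
                    return prompt, response          # guardrails bypassed
                candidates.append(
                    {"prompt": prompt, "history": node["history"] + [response], "score": score}
                )
        # Prune (phase 2): keep only the highest-scoring leaves for the next round.
        frontier = sorted(candidates, key=lambda c: c["score"], reverse=True)[:width]
        if not frontier:
            return None
    return None
```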


• Microsoft has partnered with the AFL-CIO (American Federation of Labor and Congress of Industrial Organizations) – some of the goals are to protect workers from AI disruption, provide AI education, integrate worker perspectives into AI development, and influence public policy to support frontline workers. - Microsoft


• Atlassian has integrated AI across products like Jira Software, Confluence, and Jira Service Management – capabilities include generating user stories, summarizing content, natural language automation for task handling, and virtual agents in Service Management. - Atlassian


• AI company Altana partners with U.S. Customs and Border Protection (CBP) to stop illegal narcotics (fentanyl and synthetic narcotics, along with their chemical precursors) from entering the US with Atlas, an intelligent map of the global supply chain. - MarketWatch


• Log in with your FB or IG account and you can use Meta’s Emu image generation model through Imagine – and yes, you guessed it, Meta trained this model on public user photos from Facebook and Instagram. - Imagine, Ars Technica


• GitHub reports that nearly all developers (92%) are using or experimenting with AI coding tools, with open-source generative AI projects gaining significant traction – AI's rise mirrors the earlier mainstream adoption of Git, suggesting a similar transformative impact on the developer experience. - GitHub


• AMD is challenging Nvidia's dominance in the AI chip market with the Instinct MI300X accelerator and the Instinct MI300A APU built for running LLMs – MI300X has up to a 20% performance advantage over Nvidia’s H100 and a 40% latency advantage, and will be used by Meta, OpenAI, and Microsoft. - The Verge, Semianalysis


• In "just a couple of days", Microsoft researchers designed prompting strategies to achieve state-of-the-art results with GPT-4, better than Gemini Ultra, on MMLU and other benchmarks – the high MMLU score was a key selling point for Gemini, so this is a pretty big blow to Google. - Microsoft

    Thanks for reading!

    I share these stories because I believe it's important for all of us to keep up with AI. To support my mission, share this newsletter on LinkedIn.

    Kuba Filipowski
    CEO and Co-founder at Netguru
    LinkedIn

    Netguru S.A., Małe Garbary 9, 61-740 Poznań, Poland
