The battle against misinformation is only beginning, and the AI industry has some unsolved problems.
View in browser

Greetings,

 

While Big Tech is introducing standards for preventing misinformation, hackers are building ChatGPT alternatives that can code malware and write phishing emails. But what caught me off guard the most is the UN warning against implanting AI chips into our brains. It’s both exciting and a bit scary when science-fiction becomes just science.

  • To combat misinformation, the Coalition for Content Provenance and Authenticity (C2PA) provides specifications for cryptographically certifying the source and authenticity of content and adding a history of changes made to it. - C2PA 

  • McKinsey reveals GenAI will save billions across industries, with the biggest impact on high-wage jobs – forecasts for achieving human-level competencies are now expected much earlier (like language understanding, which went from 2027 to 2023). - McKinsey

  • Meta delivers a new benchmark for open-source model performance with Llama 2 – it was trained on 2 trillion tokens and with 40% more data than Llama 1, and it has double the context length. - Meta

  • This had to happen – WormGPT is a ChatGPT alternative built by hackers, it has no filter, no limitations, it will write malware and phishing emails for you and give you tips on doing illegal stuff. - PCMag

  • If you’re wondering how easy it is to make an LLM do something it’s not supposed to do, play this simple game to see how good you are at prompt injection. - Gandalf

  • Managing context, framing the problem for the model, breaking down complex tasks, using step-by-step prompts, and providing feedback are key challenges when building a LLM-powered product feature. - Earthly

  • There’s a big issue that the industry has no solution for yet – AI cannibalism is when models learn from AI-generated output, which has been shown to decrease the quality of models and make them less useful. - TechRadar 

  • LangChain (framework for LLM-powered apps) has been hyped up a lot – but data scientist Max Woolf says it’s too complicated, not suitable for beginners, and it might be easier to write your own package than use LC. - Max Woolf

  • Meanwhile, LangChain keeps evolving and they’re beta testing a new tool, LangSmith – built to address the challenge of productizing LLM-powered apps, it’s a unified platform for debugging, testing, monitoring, and evaluating outputs. - LangChain

  • Because neurotech is “advancing at warp speed”, the UN is warning against putting AI chips into our brains because they could steal our most private thoughts – so far, people that tried brain implants for medical reasons didn’t like it at all. - Business Insider

 

 

----

Found this helpful? Forward this email to a colleague.

Was this email forwarded to you? Sign up here to stay on top of the AI news.

    kuba filipowski
    Kuba Filipowski
    CEO and Co-founder at Netguru
    LinkedIn
    netguru-sign-rgb-01

    Netguru is a consultancy, product design, and software development company founded in 2020.

      icons_facebook gray
      icons_linkedin gray
      icons_twitter gray
      icons_instagram gray

      Netguru S.A., Małe Garbary 9, Poznań, Polska 61-740, Poland

      Unsubscribe Manage preferences