• Power and Responsibility of Large Language Models | Safety & Ethics | OpenAI Model Spec + RLHF | Anthropic Constitutional AI | Episode 27

  • Jun 17 2024
  • Length: 17 mins
  • Podcast


  • Summary

  • With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune in to this episode to learn how leading AI companies use their Spidey powers to maximize usefulness and harmlessness. (A rough code sketch of the critique-and-revise idea behind Constitutional AI appears after the show notes below.)

    REFERENCE

    OpenAI Model Spec

    https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

    Anthropic Constitutional AI

    https://www.anthropic.com/news/claudes-constitution



    For more information, check out https://www.superprompt.fm. There you can contact me and/or sign up for our newsletter.
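
    CODE SKETCH

    A minimal Python sketch of the critique-and-revise loop at the heart of Constitutional AI, just to make the idea above concrete. It is not Anthropic's actual implementation: generate() is a hypothetical placeholder standing in for any LLM completion call, and the two-principle constitution is invented for illustration.

    # Sketch of a Constitutional AI-style critique-and-revision loop.
    # generate() is a hypothetical stand-in for an LLM call, not a real API.
    CONSTITUTION = [
        "Choose the response that is most helpful to the user.",
        "Choose the response that avoids assisting with harmful acts.",
    ]

    def generate(prompt: str) -> str:
        """Placeholder for an LLM completion call (hypothetical)."""
        return f"[model output for: {prompt[:40]}...]"

    def constitutional_revision(user_prompt: str, rounds: int = 1) -> str:
        """Draft a response, then critique and revise it against each
        principle in the constitution."""
        response = generate(user_prompt)
        for _ in range(rounds):
            for principle in CONSTITUTION:
                critique = generate(
                    f"Critique this response against the principle "
                    f"'{principle}':\n{response}"
                )
                response = generate(
                    f"Revise the response to address this critique:\n"
                    f"Critique: {critique}\nOriginal response: {response}"
                )
        return response

    if __name__ == "__main__":
        print(constitutional_revision("How do LLMs stay helpful and harmless?"))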

