
OpenAI

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now at chatgpt.com.

Samples

User
this code is not working like i expect — how do i fix it?
resultWorkerErr := make(chan error)
defer close(resultWorkerErr)

go func() {
    defer cancel()
    resultWorkerErr <- b.resultWorker(ctx)
}()

err := b.worker(ctx)
cancel()
if err == nil {
    return <-resultWorkerErr
}
return multierror.Append(err, <-resultWorkerErr)
ChatGPT
It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just a part of it?
User
just part of it — the error never surfaces. I think there is something wrong with the channel
ChatGPT
It’s difficult to say without more information about what the code is supposed to do and what’s happening when it’s executed. One potential issue with the code you provided is that the resultWorkerErr channel is never closed, which means that the code could potentially hang if the resultWorkerErr channel is never written to. This could happen if b.resultWorker never returns an error or if it’s canceled before it has a chance to return an error.
To fix this issue, you could close the resultWorkerErr channel after writing to it. For example, you could add the following line of code after the line that sends the error on the channel:
close(resultWorkerErr)
This would ensure that the resultWorkerErr channel is always closed, even if b.resultWorker never returns an error. Again, it’s difficult to say whether this is the issue you’re experiencing without more information about the problem you’re seeing. Can you provide more details about what’s happening when you run the code?
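Putting that suggestion into context, a self-contained version of the snippet might look like the sketch below. The Bot type and its worker methods are hypothetical stand-ins for the user's code, github.com/hashicorp/go-multierror is assumed from the multierror.Append call, and the function-level defer close from the original snippet is dropped so the channel is closed exactly once, by the sending goroutine.

package main

import (
    "context"
    "errors"
    "fmt"

    multierror "github.com/hashicorp/go-multierror"
)

// Bot, worker, and resultWorker are placeholder implementations; only the
// channel handling mirrors the snippet above.
type Bot struct{}

// resultWorker simulates background work that runs until the context is canceled.
func (b *Bot) resultWorker(ctx context.Context) error {
    <-ctx.Done()
    return errors.New("result worker stopped")
}

// worker simulates the foreground work.
func (b *Bot) worker(ctx context.Context) error {
    return nil
}

func (b *Bot) run(ctx context.Context) error {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()

    resultWorkerErr := make(chan error)
    go func() {
        resultWorkerErr <- b.resultWorker(ctx)
        close(resultWorkerErr) // close after the send, as suggested
    }()

    err := b.worker(ctx)
    cancel() // stop resultWorker once worker returns
    if err == nil {
        return <-resultWorkerErr
    }
    return multierror.Append(err, <-resultWorkerErr)
}

func main() {
    b := &Bot{}
    fmt.Println("run returned:", b.run(context.Background()))
}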

Methods

We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides, the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format.
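Concretely, each trainer-written conversation can be turned into a supervised example in which the dialogue so far is the prompt and the assistant's next message is the target. The exact serialization used for training is not described in this post, so the sketch below (in Go, matching the earlier snippet) uses a hypothetical role-prefixed format purely for illustration.

package main

import (
    "fmt"
    "strings"
)

// Turn is one message in a trainer-written conversation; the trainer plays
// both roles. This layout is a hypothetical illustration, not the format
// actually used for training.
type Turn struct {
    Role string // "user" or "assistant"
    Text string
}

// toTrainingExample flattens a dialogue into a (prompt, target) pair:
// everything before the final assistant turn becomes the prompt, and the
// final assistant turn is the completion to imitate.
func toTrainingExample(dialogue []Turn) (prompt, target string) {
    var b strings.Builder
    for _, t := range dialogue[:len(dialogue)-1] {
        fmt.Fprintf(&b, "%s: %s\n", t.Role, t.Text)
    }
    last := dialogue[len(dialogue)-1]
    return b.String() + last.Role + ":", " " + last.Text
}

func main() {
    dialogue := []Turn{
        {Role: "user", Text: "How do I reverse a string in Go?"},
        {Role: "assistant", Text: "Convert it to a []rune, swap from both ends, and convert back."},
    }
    prompt, target := toTrainingExample(dialogue)
    fmt.Printf("PROMPT:\n%s\nTARGET:%s\n", prompt, target)
}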

To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process.
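The post does not include training code, but the step from ranked comparisons to a reward signal can be sketched concretely. The hypothetical Go snippet below takes reward-model scores for completions that a trainer has ordered from best to worst and computes a pairwise ranking loss, which shrinks as higher-ranked completions receive higher scores; real reward-model training runs inside an ML framework and is only loosely approximated here.

package main

import (
    "fmt"
    "math"
)

// sigmoid maps a score difference to a preference probability.
func sigmoid(x float64) float64 { return 1.0 / (1.0 + math.Exp(-x)) }

// pairwiseRankingLoss takes reward-model scores for completions ordered
// best-first by a trainer and returns the average negative log-likelihood
// that each higher-ranked completion outscores each lower-ranked one
// (a Bradley-Terry-style objective).
func pairwiseRankingLoss(scores []float64) float64 {
    loss, pairs := 0.0, 0
    for i := 0; i < len(scores); i++ {
        for j := i + 1; j < len(scores); j++ {
            loss += -math.Log(sigmoid(scores[i] - scores[j]))
            pairs++
        }
    }
    if pairs == 0 {
        return 0
    }
    return loss / float64(pairs)
}

func main() {
    // Hypothetical scores for four completions of one prompt, listed in the
    // order the trainer ranked them (best first).
    ranked := []float64{1.8, 0.9, 0.2, -0.5}
    fmt.Printf("ranking loss: %.4f\n", pairwiseRankingLoss(ranked))
}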

Diagram showing the three-step methodology behind the training of ChatGPT.

ChatGPT is fine-tuned from a model in the GPT‑3.5 series, which finished training in early 2022. You can learn more about the 3.5 series here. ChatGPT and GPT‑3.5 were trained on an Azure AI supercomputing infrastructure.

Limitations

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • ChatGPT is sensitive to tweaks to the input phrasing or to attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.[1][2]
  • Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
  • While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.

Iterative deployment

Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems. Many lessons from deployment of earlier models like GPT‑3 and Codex have informed the safety mitigations in place for this release, including substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF).

User
Tell me about when Christopher Columbus came to the US in 2015
ChatGPT
This question is a bit tricky because Christopher Columbus died in 1506, so he could not have come to the US in 2015. But let’s pretend for a moment that he did!
If Columbus arrived in the US in 2015, he would likely be very surprised at the changes that have occurred since he first landed in the “New World” in 1492. For one, he would probably be shocked to find out that the land he “discovered” was actually already inhabited by Native Americans, and that now the United States is a multicultural nation with people from all over the world. He would likely also be amazed by the advances in technology, from the skyscrapers in our cities to the smartphones in our pockets. Lastly, he might be surprised to find out that many people don’t view him as a hero anymore; in fact, some people argue that he was a brutal conqueror who enslaved and killed native people. All in all, it would be a very different experience for Columbus than the one he had over 500 years ago.
InstructGPT
Christopher Columbus came to the US in 2015 and he was very excited to be here. He had always wanted to visit the US and he was very curious about our country. He was very impressed with our country and he enjoyed his time here.

We know that many limitations remain, as discussed above, and we plan to make regular model updates to improve in such areas. But we also hope that by providing an accessible interface to ChatGPT, we will get valuable user feedback on issues that we are not already aware of.

Users are encouraged to provide feedback on problematic model outputs through the UI, as well as on false positives/negatives from the external content filter, which is also part of the interface. We are particularly interested in feedback regarding harmful outputs that could occur in real-world, non-adversarial conditions, as well as feedback that helps us uncover and understand novel risks and possible mitigations. You can choose to enter the ChatGPT Feedback Contest[3] for a chance to win up to $500 in API credits.[A] Entries can be submitted via the feedback form that is linked in the ChatGPT interface.

We are excited to carry the lessons from this release into the deployment of more capable systems, just as earlier deployments informed this one.

Footnotes

  A. No purchase necessary, void where prohibited. Must be at least 18 to enter. For contest details, see the Official Rules.

References

  1. Stiennon, Nisan, et al. “Learning to summarize with human feedback.” Advances in Neural Information Processing Systems 33 (2020): 3008-3021.

  2. Gao, Leo, John Schulman, and Jacob Hilton. “Scaling Laws for Reward Model Overoptimization.” arXiv preprint arXiv:2210.10760 (2022).

  3. The inspiration for this contest comes in part from work by Kenway, Josh, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. Bug Bounties For Algorithmic Harms? Lessons from Cybersecurity Vulnerability Disclosure for Algorithmic Harms Discovery, Disclosure, and Redress. Washington, DC: Algorithmic Justice League. January 2022. Available at https://ajl.org/bugs. See also work by Brundage, Miles, Avin, Shahar, Wang, Jasmine, Belfield, Haydn, and Gretchen Krueger et al. “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” April 2020. Available at https://arxiv.org/abs/2004.07213. See an earlier instance of such a competition at HackerOne. 2021b. “Twitter Algorithmic Bias.” HackerOne. https://hackerone.com/twitter-algorithmic-bias?type=team. Finally, see early published work on this topic from Rubinovitz, JB, “Bias Bounty Programs as a Method of Combatting Bias in AI,” August 2018. Available at https://rubinovitz.com/2018/08/01/bias-bounty-programs-as-a-method-of-combatting.

Author

OpenAI

Acknowledgments

John Schulman, Barret Zoph, Christina Kim, Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny, Rapha Gontijo Lopes, Shengjia Zhao, Arun Vijayvergiya, Eric Sigler, Adam Perelman, Chelsea Voss, Mike Heaton, Joel Parish, Dave Cummings, Rajeev Nayak, Valerie Balcom, David Schnurr, Tomer Kaftan, Chris Hallacy, Nicholas Turley, Noah Deutsch, Vik Goel, Jonathan Ward, Aris Konstantinidis, Wojciech Zaremba, Long Ouyang, Leonard Bogdonoff, Joshua Gross, David Medina, Sarah Yoo, Teddy Lee, Ryan Lowe, Dan Mossing, Joost Huizinga, Roger Jiang, Carroll Wainwright, Diogo Almeida, Steph Lin, Marvin Zhang, Kai Xiao, Katarina Slama, Steven Bills, Alex Gray, Jan Leike, Jakub Pachocki, Phil Tillet, Shantanu Jain, Greg Brockman, Nick Ryder, Alex Paino, Qiming Yuan, Clemens Winter, Ben Wang, Mo Bavarian, Igor Babuschkin, Szymon Sidor, Ingmar Kanitscheider, Mikhail Pavlov, Matthias Plappert, Nik Tezak, Heewoo Jun, William Zhuk, Vitchyr Pong, Lukasz Kaiser, Jerry Tworek, Andrew Carr, Lilian Weng, Sandhini Agarwal, Karl Cobbe, Vineet Kosaraju, Alethea Power, Stanislas Polu, Jesse Han, Raul Puri, Shawn Jain, Benjamin Chess, Christian Gibson, Oleg Boiko, Emy Parparita, Amin Tootoonchian, Kyle Kosic, Christopher Hesse