OpenAI on Wednesday announced two reasoning models that developers can use to classify a range of online safety harms on their platforms.
The artificial intelligence models are called gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, and their names reflect their sizes. They are fine-tuned, or adapted, versions of OpenAI's gpt-oss models, which the company announced in August.
OpenAI is introducing them as so-called open-weight models, which means their parameters, the numerical values a model learns during training that determine its outputs, are publicly available. Open-weight models can offer transparency and control, but they are different from open-source models, whose full source code is available for users to customize and modify.
Organizations can configure the new models to classify content according to their own safety policies.
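In practice, a developer pairs a platform-specific policy with the content to be classified at inference time. The sketch below is a hypothetical illustration of that framing; the policy text, label names, and chat-message layout are assumptions for demonstration, not OpenAI's documented API.

```python
# Hypothetical sketch of a policy-conditioned moderation request for an
# open-weight classifier such as gpt-oss-safeguard-20b. The policy wording,
# labels, and message format below are illustrative assumptions.

POLICY = (
    "Classify the user content as ALLOWED or VIOLATION.\n"
    "VIOLATION covers harassment, encouragement of self-harm, "
    "and sales of illegal goods."
)

def build_classification_prompt(content: str) -> list[dict]:
    """Pair the platform's own policy with the content to classify."""
    return [
        {"role": "system", "content": POLICY},  # the platform's policy
        {"role": "user", "content": content},   # the post under review
    ]

messages = build_classification_prompt("Example post to review")
print(messages[0]["role"])  # system
```

Because the policy travels with each request rather than being baked into the weights, a platform can revise its rules without retraining the model.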

CNBC