
OpenAI enhances security, control and cost management for enterprise API users

OpenAI today announced a host of new features for enterprise users, with new tools that aim to enhance administrative control and security, improvements to its Assistants API and additional options for cost management.

The company said the new features will be useful both for large enterprises and for any developer hoping to scale projects quickly through its application programming interface-based offerings.

Stronger security and enhanced controls

In terms of security, the company announced a new feature called PrivateLink that makes it possible for enterprises to establish a direct, private link between OpenAI and Microsoft Corp.’s Azure cloud, minimizing exposure to the public internet. It’s aimed at companies that use the Microsoft Azure OpenAI service to fine-tune OpenAI models such as GPT-4, giving them a more secure link to the cloud.

Additionally, the company announced the availability of native Multi-Factor Authentication for users who require stronger access controls. These new capabilities add to an existing stack of enterprise-grade security features such as role-based access controls, single sign-on, data encryption at rest via AES-256, data encryption in transit using TLS 1.2 and SOC 2 Type II certification.

On the administrative control side, the company announced a new Projects feature that’s said to provide companies with more granular control and better oversight of their individual OpenAI projects. Users can now scope API keys and roles to specific projects, restrict which OpenAI models can be used through allow lists, and set usage and rate limits on access to avoid unexpectedly large bills.
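For developers, project scoping surfaces in the API client itself. Below is a minimal sketch using the official openai Python SDK (v1.x); the API key and project ID are hypothetical placeholders, and the exact client options may vary by SDK version:

```python
# Minimal sketch: using a project-scoped key with the OpenAI Python SDK (v1.x).
# The API key and project ID below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="sk-proj-...",      # a key scoped to a single project
    project="proj_example123",  # hypothetical project ID from the dashboard
)

# Requests made through this client are billed and rate-limited against
# that project, so usage caps configured in the dashboard apply here.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello from a scoped project"}],
)
print(response.choices[0].message.content)
```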

Revamped Assistants API

The Assistants API is one of OpenAI’s more interesting, albeit relatively unknown, enterprise-focused tools, helping organizations quickly and easily deploy customized and fine-tuned models that power conversational assistants. It allows these models to call upon specific documents using retrieval-augmented generation techniques, which is useful for companies that want to enhance the knowledge of their AI assistants with their internal datasets.

According to OpenAI, the Assistants API now supports more advanced file retrieval capabilities, with a new “file_search” feature that can handle up to 10,000 files per assistant. This is a 500-times improvement on the original Assistants API, which was previously limited to just 20 files. It also adds new search functionality, including parallel queries, improved re-ranking and query rewriting.
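As a rough illustration of how this looks in practice, here is a minimal sketch using the openai Python SDK’s beta Assistants interface. The file names and instructions are hypothetical, and the exact SDK surface may differ by version:

```python
# Sketch: attaching files to an assistant via file_search (openai SDK v1.x beta).
# File paths and instructions here are hypothetical examples.
from openai import OpenAI

client = OpenAI()

# Create a vector store and upload internal documents into it.
vector_store = client.beta.vector_stores.create(name="internal-docs")
client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id,
    files=[open("handbook.pdf", "rb"), open("pricing.pdf", "rb")],
)

# Create an assistant that can search those files when answering.
assistant = client.beta.assistants.create(
    model="gpt-4-turbo",
    instructions="Answer questions using the attached company documents.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```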

A second new feature for the Assistants API is the addition of real-time response streaming, which means GPT-4 Turbo and GPT-3.5 Turbo can stream outputs token by token as they’re generated, rather than waiting for the full response to be completed before replying to users.
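In the Python SDK, that streaming behavior looks roughly like the sketch below. It assumes the `client` and `assistant` objects from the earlier sketch, and the helper names reflect the v1.x beta interface:

```python
# Sketch: streaming an Assistants API run token by token (openai SDK v1.x beta).
# Assumes `client` and `assistant` were created as in the earlier example.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Summarize the handbook."}]
)

with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
) as stream:
    # Print text deltas as soon as the model emits them,
    # instead of waiting for the full completion.
    for text in stream.text_deltas:
        print(text, end="", flush=True)
```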

Finally, the API is getting a new “vector_store” object to aid in file management, and more granular control over token usage to help users lower their costs, the company said.
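The vector store object is the same one shown in the file search sketch above; the token controls are exposed per run. A hedged sketch of capping token usage, assuming the same beta SDK surface, with arbitrary example limits:

```python
# Sketch: capping token usage on a single Assistants API run.
# The numeric limits are arbitrary examples, not recommendations.
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    max_prompt_tokens=2000,     # cap context tokens pulled into the prompt
    max_completion_tokens=500,  # cap tokens generated in the reply
)
```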

New cost management capabilities

OpenAI also announced two additional cost management features today, aimed at helping enterprises scale their AI usage without going over budget, the company explained.

The first is discounted usage on committed throughput: GPT-4 or GPT-4 Turbo customers with a sustained level of tokens per minute can request access to “provisioned throughput” and obtain discounts of between 10% and 50%, based on the size of their commitment.

There’s also a new Batch API that can be used to run non-urgent workloads asynchronously. With the Batch API, requests cost 50% less than standard shared-capacity prices, and customers also get access to higher rate limits. According to OpenAI, this is ideal for tasks such as model evaluation, offline classification, summarization and synthetic data creation workloads.
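A minimal sketch of the Batch API flow, assuming a JSONL file of pre-written requests; the file name and custom IDs are hypothetical:

```python
# Sketch: submitting an asynchronous batch job (openai SDK v1.x).
# "requests.jsonl" holds one JSON request per line, e.g.:
# {"custom_id": "task-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-3.5-turbo", "messages": [...]}}
from openai import OpenAI

client = OpenAI()

batch_input = client.files.create(
    file=open("requests.jsonl", "rb"),
    purpose="batch",
)

batch = client.batches.create(
    input_file_id=batch_input.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # results arrive within 24 hours at the discounted rate
)
print(batch.id, batch.status)
```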

Today’s updates are designed to counter the growing popularity of open-source large language models such as Meta Platforms Inc.’s Llama 3 and Mistral AI’s models. They add up to a simpler, plug-and-play experience that reduces complexity for developers who simply want to crack on with their projects, without worrying about the infrastructure overheads.

Image: Microsoft Designer
