What Is It & What Can It Do?

GPT-4 is an artificial intelligence large language model that can mimic human-like writing and reasoning.

The AI processes text-based tasks, such as writing, summarizing, and answering questions, with improved reasoning and conversational abilities. The technology builds on the capabilities of GPT-3, using larger data sets for enhanced accuracy and fluency.

This cheat sheet explores GPT-4 from a high level: how to access GPT-4 for either consumer or business use, who made it, and how it works.

What is GPT-4?

GPT-4 is a large multimodal model that accepts text and image inputs and produces human-like text in response. It can solve written problems, generate original prose, and interpret images. GPT-4 is the fourth generation of OpenAI’s foundation model.

Who owns GPT-4?

GPT-4 is owned by OpenAI, an independent tech company based in San Francisco. Founded in 2015, OpenAI started as a nonprofit but has since shifted to a for-profit model. OpenAI has received funding from Elon Musk, Microsoft, Amazon Web Services, Infosys, and other corporate and individual backers.

OpenAI has also produced ChatGPT, a free-to-use chatbot spun out of the previous generation model, GPT-3.5, and DALL-E, an image-generating deep learning model. As the technology improves and grows in its capabilities, OpenAI reveals less about how its AI solutions are trained.

When was GPT-4 released?

OpenAI announced its release of GPT-4 on March 14, 2023. It was immediately available for ChatGPT Plus subscribers, while other interested users needed to join a waitlist for access.

SEE: Salesforce looped generative AI into its sales and field service products.

How can you access GPT-4?

The public version of GPT-4 is available at the ChatGPT portal site.

OpenAI noted that this access may be slow, as it expected to be “severely capacity constrained.” The company plans to release a new subscription tier for people who use GPT-4 often, as well as a free GPT-4 access portal with a limited number of allowable queries. No timeline has been announced for either.

How much does GPT-4 cost to use?

For an individual, a ChatGPT Plus subscription costs $20 per month.

Enterprise customers wanting to use the GPT-4 API can join the waitlist. Access is limited; as of now, OpenAI has given only one company — the accessibility software group, Be My Eyes — partner access to its visual capabilities.

Pricing for the text-only GPT-4 API starts at $0.03 per 1,000 prompt tokens (a token is about four characters of English text) and $0.06 per 1,000 completion (output) tokens, OpenAI said. OpenAI’s documentation explains in more detail how tokens are counted.

A second option, known as gpt-4-32k, offers a larger context window of roughly 50 pages of text. It costs $0.06 per 1,000 prompt tokens and $0.12 per 1,000 completion tokens.
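As a quick illustration of the arithmetic, the Python sketch below estimates the cost of a single API call at the prices quoted above. The token counts in the example are invented; real counts depend on the prompt and the tokenizer.

```python
# Rough cost estimate for a GPT-4 API call at the per-1,000-token prices above.
# The token counts passed in the examples are made up for illustration.

PRICES_PER_1K_TOKENS = {               # USD per 1,000 tokens
    "gpt-4":     {"prompt": 0.03, "completion": 0.06},
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    price = PRICES_PER_1K_TOKENS[model]
    return (prompt_tokens / 1000) * price["prompt"] + (completion_tokens / 1000) * price["completion"]

# A 1,500-token prompt with a 500-token answer on the base model:
print(f"${estimate_cost('gpt-4', 1500, 500):.3f}")        # $0.075
# The same call on the larger-context gpt-4-32k model:
print(f"${estimate_cost('gpt-4-32k', 1500, 500):.3f}")    # $0.150
```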

Other AI assistants, such as GitHub’s Copilot X, now integrate GPT-4.

Capabilities of GPT-4

Like its predecessor, GPT-3.5, GPT-4’s main draw is its output in response to natural language questions and other prompts.

OpenAI says GPT-4 can “follow complex instructions in natural language and solve difficult problems with accuracy.” Specifically, GPT-4 can solve math problems, answer questions, make inferences, or tell stories. In addition, GPT-4 can summarize large chunks of content, which is useful for either consumer reference or business use cases, such as a nurse summarizing the notes from a client visit.
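For teams using the API rather than the ChatGPT interface, a summarization request is a single call. The sketch below uses the OpenAI Python SDK; the model name, the two-sentence instruction, and the visit note are placeholder choices for illustration, not a recommended configuration.

```python
# Minimal summarization sketch with the OpenAI Python SDK (pip install openai).
# Requires an OPENAI_API_KEY environment variable; the note text is invented.
from openai import OpenAI

client = OpenAI()

visit_note = (
    "Client reported improved mobility since the last visit. "
    "Blood pressure 128/82, medication adherence good. "
    "Follow-up scheduled in two weeks."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Summarize the visit note in two sentences."},
        {"role": "user", "content": visit_note},
    ],
)

print(response.choices[0].message.content)
```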

OpenAI tested GPT-4’s ability to produce coherent, correct answers using several skills assessments, including AP and Olympiad exams and the Uniform Bar Examination. It scored in the 90th percentile on the bar exam and the 93rd percentile on the SAT Evidence-Based Reading & Writing section; its scores on AP exams varied.

These results are not true indicators of knowledge. Instead, running GPT-4 through standardized tests shows the model’s ability to form correct-sounding answers from preexisting writing and art on which it was trained.

While OpenAI is tight-lipped about the specifics of GPT-4’s training, LLMs are typically trained by first translating the information in a dataset into tokens; the dataset is also cleaned to remove garbled or repetitive data.
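The exact preprocessing OpenAI applies to GPT-4’s training data is not public, but the tokenization step itself can be seen with OpenAI’s open-source tiktoken library, which publishes the encoding used by GPT-4-era models.

```python
# Tokenizing text with tiktoken (pip install tiktoken). This only illustrates
# what "translating text into tokens" means; it says nothing about how the
# training dataset was assembled or cleaned.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

text = "GPT-4 is a large multimodal model."
tokens = encoding.encode(text)

print(tokens)                   # a list of integer token IDs
print(len(tokens), "tokens")    # roughly one token per four characters of English
print(encoding.decode(tokens))  # decoding round-trips back to the original text
```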

Next, AI companies typically employ people to apply reinforcement learning to the model, nudging the model toward responses that make common sense. The weights, or the parameters that tell the AI which concepts are related to each other, may be adjusted in this stage.
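OpenAI has not published GPT-4’s training recipe, but in a typical reinforcement-learning-from-human-feedback setup, human rankings become a training signal through a reward model trained on preference pairs. The sketch below shows that pairwise loss in its simplest form; the scores are made-up stand-ins for a real model’s outputs, not anything from OpenAI.

```python
import math

# Generic illustration of the pairwise preference loss used to train a reward
# model in RLHF: the loss is small when the human-preferred ("chosen") response
# already scores higher than the "rejected" one, and large otherwise.

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-likelihood that the preferred response wins the comparison."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

print(round(preference_loss(2.1, 0.3), 3))   # ~0.153: ranking agrees with the human
print(round(preference_loss(0.3, 2.1), 3))   # ~1.953: ranking disagrees, loss is larger
```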

What is Bing Chat?

Microsoft’s Bing Chat is an AI assistant deployed as a sidebar alongside the Bing search engine. Users can ask it to answer questions or generate images. Bing Chat runs on GPT-4 and distinguishes itself from ChatGPT by remembering previous conversations, although this functionality doesn’t always work the way users expect.

SEE: How to query Bing Chat to get the results you want (TechRepublic)

Bing Chat requires a Microsoft account and the Edge browser to use.

In addition, Microsoft offers Bing Chat Enterprise, which adds data protections and additional functionality to Bing Chat. Admins can provide managed access to Bing Chat Enterprise through Microsoft Entra ID (Azure Active Directory).

Limitations of GPT-4 for business

Like other AI tools of its ilk, GPT-4 has limitations.

For example, GPT-4 does not check if its statements are accurate. Its training on text and images from throughout the internet can make its responses nonsensical or inflammatory. However, OpenAI has digital controls and human trainers to try to keep the output as useful and business-appropriate as possible.

Additionally, GPT-4 tends to create “hallucinations,” or inaccuracies. Its words may make sense in sequence, since they’re based on probabilities established by what the system was trained on, but they aren’t fact-checked or directly connected to real events. Confirmation bias can occur. OpenAI is working on reducing the number of falsehoods the model produces.

Another major limitation is the question of whether sensitive corporate information that’s fed into GPT-4 will be used to train the model and expose that data to external parties. Microsoft, which has a resale deal with OpenAI, plans to offer private ChatGPT instances to corporations later in the second quarter of 2023, according to an April report.

Neither GPT-4 nor GPT-3.5 incorporates information more recent than September 2021. One of GPT-4’s competitors, Google Gemini, can offer up-to-the-minute information because it can draw on current data from the web rather than relying only on its training set.

AI models can suffer from model collapse when trained on AI-generated data, a problem that is becoming more common as AI-generated content proliferates.

GPT-4 vs GPT-3.5 Turbo or OpenAI o1

As of October 2024, OpenAI has added options and enhanced models within the GPT-4 family. GPT-4 users can now choose from the larger GPT-4 Turbo or the smaller GPT-4o and GPT-4o mini. The public, free version of ChatGPT uses GPT-4o mini.

SEE: Learn how to use ChatGPT.

GPT-4 can handle images, highlighting a significant difference between GPT-4 and GPT-3.5 Turbo. It can serve as a visual aid, describing objects in the real world or determining the most important elements of a website and describing them.

“Over a range of domains — including documents with text and photographs, diagrams or screenshots — GPT-4 exhibits similar capabilities as it does on text-only inputs,” OpenAI wrote in its GPT-4 documentation.
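For developers, sending an image alongside text is a matter of mixing content types in a single message. The sketch below uses the OpenAI Python SDK with a vision-capable GPT-4 family model; the model name, image URL, and question are placeholders.

```python
# Sketch of an image-plus-text request with the OpenAI Python SDK.
# The image URL and prompt are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",   # a vision-capable model in the GPT-4 family
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the most important elements of this screenshot."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```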

Meanwhile, OpenAI o1 specializes in working through complex queries more slowly, spending additional time reasoning before it produces a detailed answer.

Is upgrading to GPT-4 worth it?

Whether the new capabilities offered through GPT-4 are appropriate for your business depends on your use cases and whether you have found success with natural language AI.

Review the capabilities and limitations of the AI, and consider where GPT-4 might save time or reduce costs. Conversely, consider which tasks might materially benefit from human knowledge, skill, and common sense.

The latest GPT-4 trends

In August 2023, GPT-4 was packaged as part of ChatGPT Enterprise. Users of the business-oriented subscription receive unlimited use of a high-speed pipeline to GPT-4.

Microsoft announced in early August 2023 that GPT-4 availability in Azure OpenAI Service has expanded to several new coverage regions.

Fine-tuning for GPT-4, which allows users to customize models, is expected to be available in fall 2023, OpenAI said.

Updates from OpenAI DevDay 2024

OpenAI regularly updates the tools it provides for developers. At its DevDay event in October 2024, the company released the following:

  • The Realtime API: This feature allows developers to build low-latency, voice-based AI apps. Available in beta to developers in OpenAI’s paid tiers, the Realtime API simplifies the creation of customer assistant bots or other tools with natural-sounding voices, without having to leave the GPT-4o ecosystem. OpenAI also released voice input and output to the Chat Completions API, although it is not low latency.
  • Fine-tune with images: Developers in any paid GPT-4o tier can now fine-tune their versions of the model with images, not just text. This helps train models for tasks such as image recognition and autonomous movement; a sketch of the training-data format follows this list.
  • Model distillation: Developers can “distill” the outputs of larger models such as o1-preview and GPT-4o into smaller models like GPT-4o mini. This helps the smaller models approach the performance of the larger models on certain tasks without inflating the cost proportionally. All developers can use the model distillation suite, which is priced the same as OpenAI’s standard model fine-tuning process.
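As a rough idea of what image fine-tuning data looks like, the snippet below writes a single training example in the JSONL chat format: a user turn mixing text and an image, followed by the assistant answer the model should learn. The file name, image URL, and label are placeholders, and the exact schema should be checked against OpenAI’s fine-tuning documentation.

```python
# Hypothetical single training example for vision fine-tuning, written as one
# JSONL line in the chat format. The image URL and label are placeholders.
import json

example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What traffic sign is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/sign.png"}},
            ],
        },
        {"role": "assistant", "content": "A stop sign."},
    ]
}

with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```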

Additionally, developers working with GPT-4o, GPT-4o mini, o1-preview, or o1-mini will automatically have access to prompt caching. This is a method of reusing tokens that can reduce the cost and latency of some prompts.
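Prompt caching happens automatically on OpenAI’s side, so there is nothing to configure; the usage details returned with each response indicate how many prompt tokens were served from cache. The field access below reflects OpenAI’s announced usage format but should be verified against the current API reference.

```python
# Checking whether prompt caching applied to a request (OpenAI Python SDK).
# Caching kicks in automatically for sufficiently long, repeated prompt prefixes.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "A long, frequently reused system prompt would go here..."}],
)

usage = response.usage
details = getattr(usage, "prompt_tokens_details", None)  # field name per OpenAI's announcement
print("Prompt tokens:", usage.prompt_tokens)
print("Cached prompt tokens:", getattr(details, "cached_tokens", 0) if details else 0)
```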

