DeepSeek's spring breeze is blowing everywhere, and no one wants to be the early riser who still arrives late to the fair.
Companies large and small have launched the full version of DeepSeek R1, many of them available for online use. Besides the official DeepSeek site, there are Tencent Yuanbao, Wen Xiaobai, Dangbei AI, and so on; I won't list them all here.
As everyone knows, DeepSeek R1 is a reasoning model, which sets it apart from other chatbots: it shows its reasoning process to the user while answering a question, as shown in the image below:
The spring breeze of DeepSeek has also reached Yong Ge's company. As soon as work resumed after the Spring Festival, the leadership requested the integration of DeepSeek.
The chatbot project Yong Ge works on runs its large model on Alibaba Cloud Bailian, uses Vercel's AI SDK as its AI library, and bases its front-end UI on Next.js AI Chatbot. After enabling DeepSeek R1 in the Bailian console, the model can be called and returns the reasoning chain alongside the answer, so the front end needs a way to display that reasoning chain. Yong Ge's first instinct was not to build it himself but to check GitHub to see whether Next.js AI Chatbot had already added support for displaying the reasoning chain.
Sure enough, Next.js AI Chatbot recently added support for reasoning models like DeepSeek R1:
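Under the hood, DeepSeek R1 emits its chain of thought before the final answer, and in the raw completion the two are typically separated by think tags, which the SDK layer strips out into a separate reasoning field. As a rough illustration of that separation (the splitReasoning helper below is hypothetical, not part of the AI SDK), it might look like this:

```typescript
// Hypothetical helper: split a raw R1-style completion into reasoning and answer.
// Assumes the chain of thought is wrapped in <think>...</think> before the answer.
function splitReasoning(raw: string): { reasoning: string; answer: string } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    // No reasoning block present: the whole completion is the answer.
    return { reasoning: "", answer: raw.trim() };
  }
  return {
    reasoning: (match[1] ?? "").trim(),
    answer: raw.replace(match[0], "").trim(),
  };
}
```

With the two parts separated like this, the front end can render the answer normally and tuck the reasoning into a collapsible panel.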
The relevant file is components/message-reasoning.tsx. Update the AI SDK to the latest version, add the following code to the original content-rendering file message.tsx, and adjust the styles slightly:
{message.reasoning && (
  <MessageReasoning
    isLoading={isLoading}
    reasoning={message.reasoning}
  />
)}
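When the response is streamed, the reasoning and the answer arrive as separate stream parts, and message.reasoning fills in before message.content does, which is why MessageReasoning receives an isLoading flag. A minimal sketch of how such a stream might be assembled into a message object (the part shapes below are assumptions modeled on the AI SDK's streaming output, not its exact types):

```typescript
// Assumed stream part shapes: reasoning deltas arrive first, then text deltas.
type StreamPart =
  | { type: "reasoning"; textDelta: string }
  | { type: "text-delta"; textDelta: string };

interface AssembledMessage {
  reasoning: string;
  content: string;
}

// Accumulate stream parts into the message the UI renders.
function assembleMessage(parts: StreamPart[]): AssembledMessage {
  const msg: AssembledMessage = { reasoning: "", content: "" };
  for (const part of parts) {
    if (part.type === "reasoning") {
      msg.reasoning += part.textDelta;
    } else {
      msg.content += part.textDelta;
    }
  }
  return msg;
}
```

While only reasoning parts have arrived, the UI can keep the reasoning panel expanded with a loading indicator; once text deltas start, it collapses the reasoning and streams the answer.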
The final result looks like this: