Lenovo AI server achieves industry-first local single-machine deployment of the full-scale DeepSeek large model, using under 1TB of memory and supporting 100 concurrent users.
On March 3, Jinshi Data reported that Lenovo Group recently announced it has achieved the industry's first single-machine deployment of the DeepSeek-R1/V3 671B large model, based on the Lenovo WenTian WA7780 G3 server, delivering a smooth experience for 100 concurrent users with less than the industry-assumed 1TB of memory (768GB in practice). According to Lenovo's test data, in a standard test environment with 512-token sequences, the system sustains a stable output of 10 tokens per second for each of 100 concurrent users, while compressing first-token response time to within 30 seconds.
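The reported figures can be combined into a rough back-of-envelope estimate of what the benchmark implies. This is an illustrative calculation using only the numbers stated in the article, not Lenovo's benchmark code; the derived aggregate throughput and completion-time figures are our own arithmetic, not claims from the announcement.

```python
# Back-of-envelope check of the reported benchmark figures.
# Inputs are taken directly from the article; derived values are illustrative.

CONCURRENT_USERS = 100   # concurrent users served on one machine
TOKENS_PER_SEC = 10      # sustained output per user, tokens/s
SEQUENCE_TOKENS = 512    # standard test sequence length, tokens
FIRST_TOKEN_S = 30       # stated worst-case first-token latency, seconds

# Aggregate decode throughput across all users.
aggregate_tps = CONCURRENT_USERS * TOKENS_PER_SEC  # 1000 tokens/s total

# Worst-case time for one user to receive a full 512-token response:
# first-token latency plus steady-state generation time.
per_user_completion_s = FIRST_TOKEN_S + SEQUENCE_TOKENS / TOKENS_PER_SEC

print(f"aggregate throughput: {aggregate_tps} tokens/s")
print(f"worst-case 512-token completion: {per_user_completion_s:.1f} s")
```

Under these assumptions, the single server would be producing about 1,000 tokens per second in aggregate, and a full 512-token response would complete in roughly 81 seconds per user in the worst case.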