Friday, February 28, 2025

My Weight Statistics (2025-02-28) Monthly Weight Measurement on the 28th of Each Month Since 28 May 2007


My 17-year Weight Management Records from 2007-05-28 to 2025-02-28 (by Calorie Restriction, i.e. Dietary Energy Restriction):



Note: According to the Singapore Health Promotion Board, a healthy BMI is greater than 18.5 and less than 23.0. A BMI below 18.5 means the individual is at risk of nutritional deficiency diseases and osteoporosis.

A BMI of 23.0 or greater means the individual is at risk of obesity-related diseases. (Ref: DD-Md2022J28)

As of 2025-02-28,

Note: ### indicates BMI ≥ 23.0

Total number of monthly weight measurements recorded: 213 (100%)

Number of times my BMI was within the healthy range of 18.5 to 22.9: 208 (97.653%)

Number of times my BMI was 23.000 or above (unhealthy): 5 (2.347%)

=======================

2007

2007-05-28 morning, my weight = 65.0 kg, BMI = 23.588###

2007-06-28 morning, my weight = 61.0 kg, BMI = 22.136

2007-07-28 morning, my weight = 59.0 kg, BMI = 21.410

2007-08-28 morning, my weight = 58.7 kg, BMI = 21.302

2007-09-28 morning, my weight = 57.5 kg, BMI = 20.866

2007-10-28 morning, my weight = 57.5 kg, BMI = 20.866

2007-11-28 morning, my weight = 56.2 kg, BMI = 20.394

2007-12-28 morning, my weight = 55.5 kg, BMI = 20.140

2008

2008-01-28 morning, my weight = 54.8 kg, BMI = 19.886

2008-02-28 morning, my weight = 54.8 kg, BMI = 19.886

2008-03-28 morning, my weight = 54.5 kg, BMI = 19.777

2008-04-28 morning, my weight = 54.4 kg, BMI = 19.741

2008-05-28 morning, my weight = 54.1 kg, BMI = 19.632

2008-06-28 morning, my weight = 54.6 kg, BMI = 19.814

2008-07-28 morning, my weight = 54.5 kg, BMI = 19.777

2008-08-28 morning, my weight = 54.3 kg, BMI = 19.705

2008-09-28 morning, my weight = 54.9 kg, BMI = 19.923

2008-10-28 morning, my weight = 55.3 kg, BMI = 20.068

2008-11-28 morning, my weight = 54.5 kg, BMI = 19.777

2008-12-28 morning, my weight = 55.6 kg, BMI = 20.177

2009

2009-01-28 morning, my weight = 54.8 kg, BMI = 19.886

2009-02-28 morning, my weight = 55.9 kg, BMI = 20.285

2009-03-28 morning, my weight = 54.8 kg, BMI = 19.886

2009-04-28 morning, my weight = 55.3 kg, BMI = 20.068

2009-05-28 morning, my weight = 55.4 kg, BMI = 20.104

2009-06-28 morning, my weight = 55.2 kg, BMI = 20.031

2009-07-28 morning, my weight = 55.1 kg, BMI = 19.995

2009-08-28 morning, my weight = 55.2 kg, BMI = 20.031

2009-09-28 morning, my weight = 56.3 kg, BMI = 20.431

2009-10-28 morning, my weight = 55.8 kg, BMI = 20.249

2009-11-28 morning, my weight = 56.2 kg, BMI = 20.394

2009-12-28 morning, my weight = 56.1 kg, BMI = 20.358

2010

2010-01-28 morning, my weight = 55.6 kg, BMI = 20.177

2010-02-28 morning, my weight = 56.5 kg, BMI = 20.503

2010-03-28 morning, my weight = 56.4 kg, BMI = 20.467

2010-04-28 morning, my weight = 55.7 kg, BMI = 20.213

2010-05-28 morning, my weight = 55.1 kg, BMI = 19.995

2010-06-28 morning, my weight = 56.4 kg, BMI = 20.467

2010-07-28 morning, my weight = 55.5 kg, BMI = 20.140

2010-08-28 morning, my weight = 55.8 kg, BMI = 20.249

2010-09-28 morning, my weight = 55.8 kg, BMI = 20.249

2010-10-28 morning, my weight = 55.4 kg, BMI = 20.104

2010-11-28 morning, my weight = 55.6 kg, BMI = 20.177

2010-12-28 morning, my weight = 55.5 kg, BMI = 20.140

2011

2011-01-28 morning, my weight = 55.4 kg, BMI = 20.104

2011-02-28 morning, my weight = 56.5 kg, BMI = 20.503

2011-03-28 morning, my weight = 55.6 kg, BMI = 20.177

2011-04-28 morning, my weight = 55.7 kg, BMI = 20.213

2011-05-28 morning, my weight = 55.6 kg, BMI = 20.177

2011-06-28 morning, my weight = 56.3 kg, BMI = 20.431

2011-07-28 morning, my weight = 56.5 kg, BMI = 20.503

2011-08-28 morning, my weight = 56.9 kg, BMI = 20.649

2011-09-28 morning, my weight = 56.2 kg, BMI = 20.394

2011-10-28 morning, my weight = 56.8 kg, BMI = 20.613

2011-11-28 morning, my weight = 59.0 kg, BMI = 21.410

2011-12-28 morning, my weight = 60.3 kg, BMI = 21.882

2012

2012-01-28 morning, my weight = 61.5 kg, BMI = 22.318

2012-02-28 morning, my weight = 62.7 kg, BMI = 22.753

2012-03-28 morning, my weight = 62.5 kg, BMI = 22.681

2012-04-28 morning, my weight = 61.3 kg, BMI = 22.246

2012-05-28 morning, my weight = 60.7 kg, BMI = 22.028

2012-06-28 morning, my weight = 60.6 kg, BMI = 21.992

2012-07-28 morning, my weight = 61.2 kg, BMI = 22.209

2012-08-28 morning, my weight = 60.8 kg, BMI = 22.064

2012-09-28 morning, my weight = 61.5 kg, BMI = 22.318

2012-10-28 morning, my weight = 62.3 kg, BMI = 22.608

2012-11-28 morning, my weight = 63.4 kg, BMI = 23.008###

2012-12-28 morning, my weight = 62.9 kg, BMI = 22.826

2013

2013-01-28 morning, my weight = 63.0 kg, BMI = 22.863

2013-02-28 morning, my weight = 62.1 kg, BMI = 22.536

2013-03-28 morning, my weight = 61.5 kg, BMI = 22.318

2013-04-28 morning, my weight = 63.1 kg, BMI = 22.899

2013-05-28 morning, my weight = 62.3 kg, BMI = 22.608

2013-06-28 morning, my weight = 62.2 kg, BMI = 22.572

2013-07-28 morning, my weight = 62.4 kg, BMI = 22.645

2013-08-28 morning, my weight = 62.6 kg, BMI = 22.717

2013-09-28 morning, my weight = 62.4 kg, BMI = 22.645

2013-10-28 morning, my weight = 62.3 kg, BMI = 22.609

2013-11-28 morning, my weight = 63.1 kg, BMI = 22.899

2013-12-28 morning, my weight = 64.4 kg, BMI = 23.371###

2014

2014-01-28 morning, my weight = 63.6 kg, BMI = 23.080###

2014-02-28 morning, my weight = 63.3 kg, BMI = 22.971

2014-03-28 morning, my weight = 62.7 kg, BMI = 22.753

2014-04-28 morning, my weight = 62.7 kg, BMI = 22.753

2014-05-28 morning, my weight = 62.9 kg, BMI = 22.826

2014-06-28 morning, my weight = 63.1 kg, BMI = 22.899

2014-07-28 morning, my weight = 62.7 kg, BMI = 22.753

2014-08-28 morning, my weight = 62.2 kg, BMI = 22.572

2014-09-28 morning, my weight = 61.2 kg, BMI = 22.209

2014-10-28 morning, my weight = 61.4 kg, BMI = 22.282

2014-11-28 morning, my weight = 60.2 kg, BMI = 21.846

2014-12-28 morning, my weight = 60.8 kg, BMI = 22.064

2015

2015-01-28 morning, my weight = 61.3 kg, BMI = 22.246

2015-02-28 morning, my weight = 61.8 kg, BMI = 22.427

2015-03-28 morning, my weight = 61.8 kg, BMI = 22.427

2015-04-28 morning, my weight = 62.5 kg, BMI = 22.681

2015-05-28 morning, my weight = 62.4 kg, BMI = 22.645

2015-06-28 morning, my weight = 63.6 kg, BMI = 23.080###

2015-07-28 morning, my weight = 62.3 kg, BMI = 22.609

2015-08-28 morning, my weight = 62.2 kg, BMI = 22.572

2015-09-28 morning, my weight = 63.0 kg, BMI = 22.863

2015-10-28 morning, my weight = 63.2 kg, BMI = 22.935

2015-11-28 morning, my weight = 62.6 kg, BMI = 22.717

2015-12-28 morning, my weight = 62.3 kg, BMI = 22.609

2016

2016-01-28 morning, my weight = 63.0 kg, BMI = 22.863

2016-02-28 morning, my weight = 62.8 kg, BMI = 22.790

2016-03-28 morning, my weight = 62.0 kg, BMI = 22.499

2016-04-28 morning, my weight = 62.0 kg, BMI = 22.499

2016-05-28 morning, my weight = 62.4 kg, BMI = 22.645

2016-06-28 morning, my weight = 62.1 kg, BMI = 22.536

2016-07-28 morning, my weight = 62.2 kg, BMI = 22.572

2016-08-28 morning, my weight = 62.6 kg, BMI = 22.717

2016-09-28 morning, my weight = 62.8 kg, BMI = 22.790

2016-10-28 morning, my weight = 62.5 kg, BMI = 22.681

2016-11-28 morning, my weight = 62.1 kg, BMI = 22.536

2016-12-28 morning, my weight = 62.3 kg, BMI = 22.608

2017

2017-01-28 morning, my weight = 62.9 kg, BMI = 22.826

2017-02-28 morning, my weight = 62.4 kg, BMI = 22.644

2017-03-28 morning, my weight = 62.8 kg, BMI = 22.789

2017-04-28 morning, my weight = 62.3 kg, BMI = 22.609

2017-05-28 morning, my weight = 62.2 kg, BMI = 22.572

2017-06-28 morning, my weight = 62.6 kg, BMI = 22.717

2017-07-28 morning, my weight = 62.4 kg, BMI = 22.645

2017-08-28 morning, my weight = 61.9 kg, BMI = 22.463

2017-09-28 morning, my weight = 62.0 kg, BMI = 22.499

2017-10-28 morning, my weight = 62.0 kg, BMI = 22.499

2017-11-28 morning, my weight = 61.5 kg, BMI = 22.318

2017-12-28 morning, my weight = 61.5 kg, BMI = 22.318

2018

My Weight 2018-01-28 0934 hr 61.0 kg BMI 22.136

My Weight 2018-02-28 0915 hr 60.7 kg BMI 22.027

My Weight 2018-03-28 0620 hr 61.0 kg BMI 22.136

My Weight 2018-04-28 1005 hr 61.7 kg BMI 22.390

My Weight 2018-05-28 0856 hr 60.5 kg BMI 21.955

My Weight 2018-06-28 0600 hr 61.4 kg BMI 22.281

My Weight 2018-07-28 0600 hr 62.2 kg BMI 22.572

My Weight 2018-08-28 0720 hr 61.4 kg BMI 22.281

My Weight 2018-09-28 0805 hr 62.1 kg BMI 22.535

My Weight 2018-10-28 0750 hr 61.3 kg BMI 22.245

My Weight 2018-11-28 1000 hr 61.5 kg BMI 22.318

My Weight 2018-12-28 0650 hr 62.5 kg BMI 22.681

2019

2019-01-28 at 1000 hr 60.9 kg BMI 22.100

2019-02-28 at 0946 hr 61.0 kg BMI 22.136

2019-03-28 at 0700 hr 62.4 kg BMI 22.644

2019-04-28 at 0828 hr 62.9 kg BMI 22.826

2019-05-28 at 0745 hr 62.4 kg BMI 22.644

2019-06-28 at 0650 hr 62.4 kg BMI 22.644

2019-07-28 at 0736 hr 62.8 kg BMI 22.789

2019-08-28 at 0629 hr 62.4 kg BMI 22.644

2019-09-28 at 0644 hr 61.9 kg BMI 22.463

2019-10-28 at 0740 hr 62.5 kg BMI 22.681

2019-11-28 at 0632 hr 62.8 kg BMI 22.789

2019-12-28 at 0726 hr 62.5 kg BMI 22.681

2020

My Weight 2020-01-28 0625 HR  62.6 kg BMI 22.717

My Weight 2020-02-28 0728 HR  62.3 kg BMI 22.608

My Weight 2020-03-28 0649 HR  61.4 kg BMI 22.281

My Weight 2020-04-28 0810 HR  62.0 kg BMI 22.499

My Weight 2020-05-28 0714 HR  62.3 kg BMI 22.608

My Weight 2020-06-28 0757 HR  60.2 kg BMI 21.846

My Weight 2020-07-28 0715 HR  61.6 kg BMI 22.354

My Weight 2020-08-28 0707 HR  61.1 kg BMI 22.173

My Weight 2020-09-28 0609 HR  60.8 kg BMI 22.064

My Weight 2020-10-28 0818 HR  60.7 kg BMI 22.027

My Weight 2020-11-28 0706 HR  60.9 kg BMI 22.100

My Weight 2020-12-28 0631 HR  60.5 kg BMI 21.955

2021

My Weight 2021-01-28 0638 HR  61.3 kg BMI 22.245

My Weight 2021-02-28 0741 HR  61.2 kg BMI 22.209

My Weight 2021-03-28 0659 HR  61.3 kg BMI 22.245

My Weight 2021-04-28 0659 HR  61.1 kg BMI 22.173

My Weight 2021-05-28 0618 HR  61.1 kg BMI 22.173

My Weight 2021-06-28 0604 HR  61.3 kg BMI 22.245

My Weight 2021-07-28 0642 HR  61.2 kg BMI 22.209

My Weight 2021-08-28 0653 HR  61.5 kg BMI 22.318

My Weight 2021-09-28 0618 HR  61.5 kg BMI 22.318

My Weight 2021-10-28 0549 HR  61.0 kg BMI 22.136

My Weight 2021-11-28 0630 HR  61.3 kg BMI 22.245

My Weight 2021-12-28 0528 HR  61.6 kg BMI 22.354

======================================

2022

My Weight 2022-01-28 0910 HR  61.1 kg  BMI 22.173

My Weight 2022-02-28 0642 HR  61.2 kg  BMI 22.209

My Weight 2022-03-28 0649 HR  61.4 kg  BMI 22.281

My Weight 2022-04-28 0649 HR  61.4 kg  BMI 22.281

My Weight 2022-05-28 0549 HR  61.0 kg  BMI 22.136

My Weight 2022-06-28 0549 HR  61.0 kg  BMI 22.136

My Weight 2022-07-28 0700 HR  60.6 kg  BMI 21.991

My Weight 2022-08-28 0640 HR  61.3 kg  BMI 22.245

My Weight 2022-09-28 0738 HR  61.7 kg  BMI 22.390

My Weight 2022-10-28 0708 HR  61.5 kg  BMI 22.318

My Weight 2022-11-28 0706 HR  60.9 kg BMI 22.100

My Weight 2022-12-28 0722 HR  61.1 kg  BMI 22.173

========

2023

My Weight 2023-01-28 0537 HR 60.9 kg BMI 22.100

My Weight 2023-02-28 0515 HR 61.4 kg  BMI 22.281

My Weight 2023-03-28 0606 HR  61.3 kg  BMI 22.245

My Weight 2023-04-28 0738 HR  61.3 kg  BMI 22.245

My Weight 2023-05-28 0721 HR  61.0 kg  BMI 22.136

My Weight 2023-06-28 0641 HR  61.2 kg  BMI 22.209

My Weight 2023-07-28 0700 HR  60.9 kg BMI 22.100

My Weight 2023-08-28 0655 HR  61.3 kg  BMI 22.245

My Weight 2023-09-28 0738 HR  61.7 kg  BMI 22.390

My Weight 2023-10-28 0708 HR  61.5 kg  BMI 22.318

My Weight 2023-11-28 0612 HR 61.4 kg  BMI 22.281

My Weight 2023-12-28 0734 HR  61.3 kg  BMI 22.245


========

2024

My Weight 2024-01-28 0734 HR  61.3 kg BMI 22.245

My Weight 2024-02-28 0510 HR  61.6 kg BMI 22.354

My Weight 2024-03-28 0642 HR  60.9 kg BMI 22.100

My Weight 2024-04-28 0721 HR  61.1 kg BMI 22.173

My Weight 2024-05-28 0537 HR  61.3 kg BMI 22.245

My Weight 2024-06-28 0651 HR  61.5 kg BMI 22.318

My Weight 2024-07-28 0612 HR 61.4 kg  BMI 22.281

My Weight 2024-08-28 0747 HR  61.1 kg BMI 22.173

My Weight 2024-09-28 0640 HR  61.1 kg BMI 22.173

My Weight 2024-10-28 0546 HR  61.5 kg BMI 22.318

My Weight 2024-11-28 0706 HR 61.4 kg  BMI 22.281

My Weight 2024-12-28 0649 HR 61.9 kg BMI 22.463

=======================================

2025

My Weight 2025-01-28 0625 HR  61.6 kg BMI 22.354

My Weight 2025-02-28 0742 HR  61.5 kg BMI 22.318


Note:

My current BMI is within the healthy range of 18.5 to 22.9.

For me, the range of healthy weight is 50.9786 kg (BMI = 18.5) to 63.10324 kg (BMI = 22.9).
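The figures above follow the standard formula BMI = weight (kg) / height (m) squared. A minimal sketch that reproduces them, assuming a height of 1.66 m (implied by the healthy-weight range just given, since 18.5 x 1.66^2 is approximately 50.9786):

```python
def bmi(weight_kg: float, height_m: float = 1.66) -> float:
    """Body Mass Index: weight in kilograms divided by height in metres, squared."""
    return weight_kg / height_m ** 2

def healthy_weight_range(height_m: float = 1.66) -> tuple[float, float]:
    """Weight band corresponding to the healthy BMI range 18.5 to 22.9."""
    return 18.5 * height_m ** 2, 22.9 * height_m ** 2

print(round(bmi(61.5), 3))      # 2025-02-28 measurement -> 22.318
print(healthy_weight_range())   # approx (50.9786, 63.10324)
```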

People with BMI values of 23 kg/m2 (or 25 kg/m2 according to some sources) and above have been found to be at risk of developing heart disease and diabetes.

To be healthy, I must have a healthy weight.

Be as lean as possible without being underweight, as recommended by the World Cancer Research Fund, United Kingdom.

=================================

Note: On 2021-05-28, I removed the unimportant details of old records from My Weight Management Records.

=================================


Ref. WeightManagement





World's Best Hospitals 2025

9. Singapore General Hospital 

https://www.newsweek.com/rankings/worlds-best-hospitals-2025

UFO Radio Network "Flying Saucer Breakfast: Tang Hsiang-lung Time", 2025.02.28 - Singapore Tourism Board Chief Representative and Executive Director for Greater China, Pan Zhengzhi: Singapore defies imagination, with surprise after surprise! All-new ways to explore...

My Weight 2025-02-28

My Weight
2025-02-28
0742 HR 
61.5 kg
BMI 22.318

==========

☝️ My weight on the morning of 2025-02-17 was *63.9 kg*, after I attended my friend's son's wedding buffet lunch on 2025-02-16.

Then I started to control my food intake successfully.


======


Thursday, February 27, 2025

DeepSeek From China to the World: DeepSeek Makes AI Accessible to Everyone

 
*From China to the World: DeepSeek Makes AI Accessible to Everyone*

从中国走向全球:DeepSeek潜入寻常百姓家 AI人人可亲

https://www.zaobao.com.sg/lifestyle/feature/story20250221-5898165

(Translated from Chinese by Cici AI app)


February 21, 2025
 
Huang Shaowei (黄少伟)
Senior Reporter, Lianhe Zaobao Supplement

 
The emergence of DeepSeek, a Chinese artificial intelligence company, has sent shockwaves across the globe. Tech experts are dissecting the reasons for DeepSeek's success, applauding its efforts in lowering the barriers to entry for high-tech solutions, making AI readily available for small and medium-sized enterprises, and even individuals. This, they believe, can help countries like Singapore achieve scientific and economic progress with fewer resources.
 
On January 20, 2025, a previously unknown Chinese AI startup, DeepSeek, chose to launch its open-source reasoning model R1 on the same day US President Trump was inaugurated. The performance of R1 rivals that of the o1 model developed by global AI giant OpenAI. DeepSeek's sudden arrival has shaken up the global AI race, prompting countries to reassess China's rising influence and potential in artificial intelligence.
 
Reasoning models, as the name suggests, are large language models capable of reasoning. When faced with complex tasks, they can generate answers through multi-step reasoning and can enhance model performance by increasing resource allocation in post-training or online reasoning phases. Reasoning models are therefore seen as a new direction for the development of large language models.
 
Vladislav Tushkanov, team manager at Kaspersky’s AI Technology Research Center, says, "Reasoning models actually originated with the o1 model released by OpenAI last December. However, the o1 model is closed-source, and only paying users have access to it. DeepSeek R1, on the other hand, is free for users and even allows them to see its reasoning process. This has garnered considerable attention."
 
What are the benefits of an open-source reasoning model? Tushkanov responds: "You can examine the reasoning process, helping you better correct problems. If the model provides an incorrect answer, you can identify where the error occurred. Also, if the model's reasoning performs well, you can transfer the knowledge to smaller models, a process we call distillation, making deployment more convenient."
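The distillation Tushkanov describes is commonly implemented by training a small student model to match the teacher's softened output distribution. A minimal generic sketch of the core loss (temperature-scaled KL divergence; illustrative only, not DeepSeek's actual recipe, and the logits shown are made up):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened output distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.2]                      # hypothetical logits
print(distillation_loss(teacher, [3.9, 1.1, 0.1]))  # small: student matches teacher
print(distillation_loss(teacher, [0.1, 3.9, 1.0]))  # large: student disagrees
```

Training the student to minimise this loss transfers the teacher's "knowledge" into a model small enough to deploy cheaply, which is what makes distilled R1 variants practical on everyday hardware.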
 
Li Boyang, Associate Professor at the School of Computer Science and Engineering at Nanyang Technological University, says, "Reasoning with large language models is a difficult technical problem. Not only did this unknown startup, DeepSeek, successfully implement it, but its reasoning accuracy is also comparable to that of OpenAI, the world's leading AI company. Secondly, DeepSeek claims to have completed model training with just $6 million and 2,000 NVIDIA H800 GPUs, showcasing its model efficiency. In contrast, OpenAI's GPT-4 is estimated to have cost between $80 million and $100 million to train."
 
US restrictions on AI chips forced DeepSeek to devise innovative engineering solutions that significantly reduce the cost of model training and inference. One major innovation is bypassing CUDA (NVIDIA's general-purpose parallel computing interface for its GPUs, used to handle complex AI computations) in favour of a lower-level programming language, which lets DeepSeek engineers control GPU instruction execution more directly and improve GPU utilization.
 
Li Boyang uses an air conditioner as an analogy: "Everyone uses a remote control to adjust the air conditioner. Pressing a button on the remote control can adjust the temperature by one degree, but it does not provide the ability to adjust it by half a degree. To achieve precise adjustments in half-degree increments, you need to directly control the internal components of the air conditioner. DeepSeek has bypassed the 'remote control' and directly connected to the air conditioner's internal system, using a lower-level programming language to send instructions to the GPU, leading to higher efficiency. This method is technically challenging."
 
DeepSeek has also adopted a Mixture of Experts (MoE) model. Multiple "experts" (smaller models) are combined, with each expert responsible for handling different types of data or tasks. The advantage of MoE is that it allows each expert to focus on their area of expertise, thus improving overall efficiency.
 
Anthony K.H. Tung (邓锦浩), Professor at the Department of Computer Science, School of Computing, National University of Singapore, explains: "There's a Chinese proverb: Three cobblers are smarter than one Zhuge Liang. DeepSeek has many cobblers, 256 to be exact. When answering questions, it doesn't have all 256 experts work together. Instead, the question is passed to eight experts, and they jointly give an answer. It's a divide-and-conquer model, and it doesn't need a very fast graphics card for training."
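Tung's divide-and-conquer routing (256 experts, of which only eight handle any given question) corresponds to top-k gating in an MoE layer. A toy sketch with made-up dimensions, not DeepSeek's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, D = 256, 8, 64   # illustrative sizes only

# Router: a linear layer that scores every expert for the incoming token.
W_router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)
# Each "expert" is just a small linear map here, for illustration.
experts = rng.standard_normal((N_EXPERTS, D, D)) / np.sqrt(D)

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ W_router                    # one score per expert
    top = np.argsort(scores)[-TOP_K:]        # indices of the 8 best-scoring experts
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                       # softmax over the chosen experts only
    # Combine only the selected experts' outputs, weighted by the gate.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

token = rng.standard_normal(D)
out = moe_forward(token)
print(out.shape)
```

Because only 8 of the 256 experts run per token, the compute per forward pass is a small fraction of what a dense model of the same total parameter count would need, which is the efficiency Tung is pointing at.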
 
Anthony K.H. Tung (邓锦浩) is also the Head of Urban Sustainable Development AI at the National University of Singapore's AI Institute. He believes that DeepSeek's emergence has driven AI democratization: "I've always been worried that small and medium-sized enterprises don't have the resources to use AI. Previously, large language models required expensive equipment and talent. DeepSeek is open-source, free for anyone to use, and users with some technical knowledge can customize it. The distilled model can be used on everyday devices like phones or computers."
 
He also believes that DeepSeek can promote local scientific development: "We don't have as many resources as American tech companies. DeepSeek allows us to pursue scientific and economic progress with fewer resources."
 
Anthony K.H. Tung (邓锦浩) laughs and says: "You don't need a butcher knife to kill a chicken. ChatGPT is like using a butcher knife to kill a chicken, requiring a large processor for everything. DeepSeek makes your device smaller, consumes less electricity, and is more portable. Previously, we had no choice, but now we do."
 
Daniel Kahneman, a renowned American psychologist, categorizes human thinking patterns into System 1 and System 2. System 1 is an intuitive, unconscious thinking system; System 2 is a controlled, conscious thinking system.
 
Li Boyang says: "So far, the AI technologies we've built, like large language models, are very similar to System 1. However, logical reasoning and mathematical abilities require System 2. While DeepSeek surpasses previous systems in this area, it's far from perfect. For example, when multiplication involves too many digits, exceeding two two-digit numbers, DeepSeek gives incorrect answers. This simple mathematics, easily done by humans, cannot be executed correctly by DeepSeek. The lack of System 2 capabilities is a common problem for large language models, not just for DeepSeek but also for OpenAI's so-called reasoning models."
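Li's multiplication example is mechanically checkable, since ordinary integer arithmetic is exact; this verifiability is one reason delegating arithmetic to external tools is a common workaround for the System 2 gap. A tiny illustration (the "model answer" here is hypothetical):

```python
# Multi-digit multiplication is exactly checkable with ordinary integers,
# so a model's claimed answer can always be verified (or computed) mechanically.
a, b = 739, 486          # three-digit operands, beyond the two-two-digit case Li cites
exact = a * b            # Python integers are arbitrary precision
print(exact)             # 359154
claimed = 359154         # a hypothetical model answer to verify
print(claimed == exact)  # True
```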
 
Many people are concerned about the safety of AI technology, especially in terms of personal data protection and privacy. DeepSeek was recently forced to be removed from South Korea due to privacy issues.
 
Tushkanov says, "People should distinguish between the DeepSeek model and the DeepSeek chatbot service. The cool thing about the DeepSeek model is that it's open-source. Basically, anyone can download it to their computer and run it entirely locally. By running it only on your own hardware, you can avoid the leakage of personal data and privacy."
 
On the other hand, DeepSeek also offers a chatbot service. This cloud service, similar to ChatGPT, Google Gemini, etc., has the same advantages and risks. Tushkanov says, "Data leakage is possible. For example, researchers found a security vulnerability in a database used by DeepSeek, which was quickly patched by DeepSeek."
 
Many users have attempted to ask DeepSeek about sensitive political issues, such as the "June Fourth" incident, Taiwan's sovereignty, Tibet, and Xinjiang. DeepSeek either refuses to answer or provides answers consistent with the Chinese government's stance, sparking widespread discussion.
 
Regarding this, Tushkanov says: "Every company must comply with the laws of its country. Different AI service companies have different legal limitations. This is not a technical issue or related to safety."
 
DeepSeek's emergence has had a significant impact on OpenAI. Perhaps due to pressure, OpenAI quickly launched its o3-mini reasoning model on January 23rd, its first time making a reasoning model available to free users.
 
Sam Altman, CEO of OpenAI, followed up quickly, announcing on February 13th that OpenAI would be releasing the GPT-5 model in the coming months and making ChatGPT available for free and unlimited use to its free users. GPT-5 will integrate the o1 and o3 reasoning models with the GPT series models, creating a new system that "can automatically choose thinking and non-thinking functions, suitable for various tasks."
 
Several US technology companies have also begun using DeepSeek models. Microsoft announced it will deploy DeepSeek-R1 on its Azure cloud service. Additionally, a simplified version of DeepSeek-R1 has been incorporated into the model directory of Microsoft's Azure AI Foundry and GitHub, allowing developers to run it on their personal computers.
 
NVIDIA's developer website has also included DeepSeek-R1 in its "Most Popular Models" category and is available for use on NVIDIA NIM microservices. The developer website calls DeepSeek-R1 "a state-of-the-art and efficient large language model" that excels in reasoning, mathematics, and coding.
 
In addition, Amazon Web Services (AWS) has allowed users to deploy the "powerful and cost-effective" DeepSeek-R1 model on its two AI service platforms.
 
On the other side of the Pacific, DeepSeek's ecosystem in China is expanding rapidly. On February 16th, Chinese tech giant Tencent confirmed a gray-scale test integrating DeepSeek into its messaging application WeChat. A gray-scale test releases a product or feature to a limited group of users before its official launch, gradually expanding the user base to identify and fix issues.
 
Reportedly, WeChat users can access the "AI Search" feature in the top search bar of their chat window and use the DeepSeek-R1 model for free. The AI search function not only integrates information sources within the Tencent ecosystem, such as WeChat Public Accounts and Video Accounts, but also supports web searches, providing users with more comprehensive answers.
 
Following WeChat, Baidu Search announced on the same day that it would fully integrate DeepSeek and its own Wenxin large language model deep search function. Subsequently, the Wenxin Intelligent Entity Platform announced it would also fully integrate DeepSeek. This platform is designed for developers to create various AI products.
 
Currently, over 200 Chinese companies have announced their integration with DeepSeek, including Huawei, Alibaba, JD.com, and more, covering industries such as telecommunications, cloud computing, chips, finance, automobiles, and mobile phones.
 
======

The end of the article


