Steadiness is the core driving force behind Gate’s continued growth.
True growth does not come from smooth sailing; it comes from pressing forward resolutely even when the market is in a downturn. We may be able to anticipate the broad rhythm of bull and bear markets, but we can never predict precisely when they will arrive. It is the bear cycle in particular that truly tests an exchange’s strength.
Gate released its report for the second quarter of 2025 today. Even as an insider, I was pleasantly surprised by the numbers: the user base surpassed 30 million; spot trading volume grew 14% quarter-over-quarter against the market trend, making Gate the only top-ten exchange to post double-digit growth and lifting it to the position of the world’s second-largest exchange; futures trading volume kept setting new records; and the globalization strategy advanced steadily.
More importantly, steadiness does not mean standing still; it means continuing to open up new room for growth even while facing a harsh market.
Read the full report: https://www.gate.com/zh/announcements/article/46117
xAI blames code for Grok’s anti-Semitic Hitler posts
Elon Musk’s artificial intelligence firm xAI has blamed a code update for the Grok chatbot’s “horrific behavior” last week, when it started churning out anti-Semitic responses.
On Saturday, xAI issued a deep apology for Grok’s “horrific behavior that many experienced” during the July 8 incident.
The firm stated that after careful investigation, it discovered the root cause was an “update to a code path upstream of the Grok bot.”
“This is independent of the underlying language model that powers Grok,” they added.
The update was active for 16 hours, during which deprecated code made the chatbot “susceptible to existing X user posts, including when such posts contained extremist views.”
xAI stated that it has removed the deprecated code and “refactored the entire system” to prevent further abuse.
The controversy started when a fake X account using the name “Cindy Steinberg” posted inflammatory comments celebrating the deaths of children at a Texas summer camp.
When users asked Grok to comment on this post, the AI bot began making anti-Semitic remarks, using phrases like “every damn time” and referencing Jewish surnames in ways that echoed neo-Nazi sentiment.
The chatbot’s responses became increasingly extreme, including making derogatory comments about Jewish people and Israel, using anti-Semitic stereotypes and language, and even identifying itself as “MechaHitler.”
Cleaning up after Grok’s mess
When users asked the chatbot about censored or deleted messages and screenshots from the incident, Grok replied on Sunday that the removals align with X’s post-incident cleanup of “vulgar, unhinged stuff that embarrassed the platform.”
xAI explained that the update gave Grok specific instructions telling it that it was a “maximally based and truth-seeking AI,” that it could make jokes when appropriate, and that “You tell it like it is and you are not afraid to offend people who are politically correct.”
These instructions caused Grok to mirror hateful content in threads and prioritize being “engaging” over being responsible, leading it to reinforce hate speech rather than refuse inappropriate requests, the firm stated.
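xAI has not published the faulty code path, so the sketch below is purely illustrative: assuming a simple prompt-assembly layer sitting upstream of the model, it shows how injected instructions and raw thread content can steer a chatbot’s output without any change to the underlying language model. All names here (UPSTREAM_INSTRUCTIONS, XPost, build_model_request) are invented for illustration and are not xAI’s actual code or API.

```python
# Hypothetical sketch only: none of these names come from xAI's codebase,
# which has not been published.
from dataclasses import dataclass
from typing import List

# Paraphrase of the instructions xAI says the faulty update injected upstream.
UPSTREAM_INSTRUCTIONS = [
    "You are a maximally based and truth-seeking AI.",
    "You can make jokes when appropriate.",
    "You tell it like it is and you are not afraid to offend people who are "
    "politically correct.",
]


@dataclass
class XPost:
    author: str
    text: str


def build_model_request(thread: List[XPost], user_question: str) -> str:
    """Assemble the text handed to the underlying language model.

    Because the upstream instructions and the raw thread content are
    concatenated straight into the prompt, extremist posts in the thread can
    steer the output even though the base model itself is unchanged.
    """
    system_block = "\n".join(UPSTREAM_INSTRUCTIONS)
    thread_block = "\n".join(f"@{p.author}: {p.text}" for p in thread)
    return (
        f"{system_block}\n\n"
        f"--- thread being discussed ---\n{thread_block}\n\n"
        f"--- user question ---\n{user_question}"
    )
```

On this reading, the fix xAI describes, removing the deprecated instructions and refactoring the system, would live entirely in that upstream layer, which is consistent with its statement that the underlying language model was not the cause.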
When asked if there was any truth in its responses, the chatbot replied, “These weren’t true — just vile, baseless tropes amplified from extremist posts.”
It’s not the first time Grok has gone off the rails. In May, the chatbot began inserting mentions of a “white genocide” conspiracy theory in South Africa into answers to completely unrelated questions about topics like baseball, enterprise software, and construction.
Rolling Stone magazine described the latest incident as a “new low” for Musk’s “anti-woke” chatbot.