AI's privacy gap: ChatGPT conversations stand unprotected before the law. Altman: "I'm wary of entering personal data, because it's hard to know who will end up with that information."
OpenAI CEO Sam Altman warns that conversations with ChatGPT are not protected by legal privilege and may become evidence in lawsuits, raising concerns about AI's privacy gaps and expanding surveillance. (Background: Sam Altman, as a new dad, reflects on the future of AI: humanoid robots are coming, are you prepared?) (Additional context: Can AI turn a case around? A woman without a lawyer used ChatGPT to uncover a $5 million inheritance fraud and persuaded the court to reopen the investigation.)

Artificial intelligence has woven itself into daily life, but a long-ignored concern is now in the spotlight: anything someone types into ChatGPT today could appear in court tomorrow.

The gap in legal privilege

Altman recently issued a warning in a conversation with podcast host Theo Von: "When you share the most sensitive content with ChatGPT, if a lawsuit occurs, we may be required to produce it. Conversations with therapists, lawyers, or doctors enjoy legal privilege, but we have not established such a mechanism for ChatGPT conversations."

That statement exposes the gap. Medical records, attorney-client discussions, and religious confessions are protected by law or professional ethics, and judicial requests for them must clear high thresholds. Chatbots have no such firewall: the text exchanged between users and models carries no legal immunity, so once a court subpoena is served, the service provider must hand over the records.

The conflict between policy and litigation

OpenAI's privacy policy states that data is encrypted in transit, but the service does not offer end-to-end encryption, and the policy notes that user content "may be shared with third parties." Such flexibility is common in business operations, but the contradiction has recently come to a head.
In the New York Times copyright lawsuit, the court ordered OpenAI to retain all related user data, a directive that collides head-on with the company's public stance of deleting data or minimizing collection, and that highlights how coarsely the current legal framework classifies AI services. Because training large models requires vast corpora, companies tend to retain text long-term to improve their algorithms; once that data falls under judicial jurisdiction, users lose the right to have it erased. The rhythms of law and technology are out of sync, and conflict is unavoidable. As Altman put it: "This may be one reason I sometimes fear using certain AI technologies: I don't know how much personal information I want to input, and I don't know who will end up with it."

The possibility of corporate surveillance

The surveillance risk that follows once companies hold personal data is not an abstract fear. If chat content can be searched at scale, governments and companies have an incentive to turn real-time conversations into risk scores or behavior predictions. For professionals such as lawyers, doctors, and psychologists, pasting client or patient questions into ChatGPT may save time but can violate professional ethics: the service provider gains the right to analyze the uploaded text, which amounts to exposing client confidences to a third party. Altman went further: "I worry that the more AI there is in the world, the more pervasive surveillance will become. History has repeatedly shown that governments go too far in this regard, and that genuinely makes me uneasy." Traditional attorney-client privilege requires that information flow only between the two parties; introducing AI adds an invisible third pair of eyes, and no existing provision grants it the same level of protection. Professionals who do not update their contracts and workflows may unknowingly break the rules.
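The confidentiality risk described above can be reduced at the source through data minimization: stripping identifying details from a prompt before it ever leaves the user's machine. Below is a minimal, hypothetical sketch in Python; the placeholder tokens and regex patterns are illustrative assumptions, not part of any real product or API.

```python
import re

# Hypothetical sketch: redact common identifiers (emails, phone numbers)
# from text before sending it to any third-party chat service.
# Patterns are deliberately simple and illustrative, not exhaustive.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "My client alice@example.com (call +1 415-555-0199) asks about trusts."
print(redact(prompt))
# My client [EMAIL] (call [PHONE]) asks about trusts.
```

A real deployment would need far more robust entity detection (names, addresses, case numbers), but the design point stands: redaction must happen client-side, before the provider ever sees the text.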
Toward solutions

A safer AI ecosystem needs progress along three axes. First, legislators should establish clear rules for "digital legal privilege," encrypting designated chats and setting a high evidentiary bar for judicial access. Second, companies must build end-to-end encryption, data minimization, and user self-management of data into product design rather than bolting them on afterward. Most importantly, users should proactively keep highly sensitive material separate from general inquiries and fall back on traditional secure channels when necessary.

The efficiency AI brings should not come at the expense of privacy. As chat interfaces gradually replace keyboards and phones, what people need is not silence but verifiable legal protections and technical safeguards. Altman's reminder may not be pleasant, but it holds up a mirror to the widening gap between the speed of innovation and the pace of institutional repair. Closing that gap is the only way for AI to truly benefit society.

Related reports:
- Dreaming of RobotFi: What new gameplay do robots bring to the blockchain?
- Vitalik's remarks on robots "meowing" spark heated discussion; Ethereum community: "I've put all my money on the ones that can imitate cat sounds."
- Computing power is king! Jen-Hsun Huang on how AI will reshape global value chains, when robots will become widespread, and whether AI can accelerate the return of manufacturing to the U.S.

This article was first published in BlockTempo, "The Most Influential Blockchain News Media."