Search results for "LLM"

OpenRouter, an AI company founded by an OpenSea co-founder, raised $40 million in financing led by a16z at a valuation of $500 million.

Gate News bot: OpenRouter, led by a former OpenSea co-founder, has completed $40 million in combined seed and Series A financing to accelerate the development of its large language model aggregation platform. The rounds were led by Andreessen Horowitz (a16z) and Menlo Ventures, with participation from Sequoia Capital and well-known angel investors in the industry. According to a source cited by The Wall Street Journal, the company is currently valued at $500 million.

The AI platform OpenRouter, founded by a former OpenSea co-founder, has completed a $40 million Series A funding round led by a16z and others.

The AI model marketplace platform OpenRouter recently announced the completion of a $40 million financing round, with a valuation of approximately $500 million. The funding will be used for product development and expanding enterprise support. Since its launch, it has attracted over 1 million developers, with annual inference spending rising rapidly, and its clients range from startups to multinational corporations.

Research: Long-term reliance on large language models like ChatGPT may impair cognitive abilities.

ChainCatcher news: research teams from MIT, Wellesley College, and the Massachusetts College of Art recently found that long-term reliance on large language models (LLMs) such as ChatGPT may impair cognitive abilities. The study shows that participants who used AI for writing had only about half as many alpha-wave connections in their brains as independent thinkers (42 vs. 79), and 83.3% were unable to accurately quote from their own articles. Researchers call this phenomenon "cognitive debt": it reduces mental load in the short term but leads to a decline in critical thinking and creativity in the long term.

$FLOCK gains over 52% after announcing a strategic collaboration with Qwen

Gate.io News

Moonshot launches Large Language Model ($LLM)

Odaily Planet Daily News: According to official sources, Moonshot has launched Large Language Model ($LLM), which currently has a market capitalization of $94.7 million and 24-hour trading volume of 1.542

Moonshot launches AI-themed meme coin LLM on the Solana chain

A Moonshot listing was detected for LLM, an AI-themed meme coin on the Solana blockchain. Its market capitalization is currently about $96 million, down 9.7% over the past 24 hours. LLM's branding is inspired by the ai16z logo. BlockBeats reminds users that meme coins like LLM have little practical use and see significant price swings, so they should be approached with caution.

Dar Open Network announced the launch of two AI frameworks: DeAI Multi-Agent Framework and aiNFT

Dar Open Network announced the launch of two AI frameworks, the DeAI Multi-Agent Framework and aiNFT. The DeAI Multi-Agent Framework is a platform for developers to build, deploy, and coordinate collaborative AI agents, while aiNFT is a tool that allows NFT owners to unlock the potential of their digital assets by customizing AI agents and interacting dynamically with users.

Initia: Rena debuts and secures $3.3 million in Pre-Seed funding, launching TrustEE data AppChain

Rena, the rollup interoperability protocol on Initia, has launched and raised $3.3 million in pre-seed financing. Rena uses the TrustEE data AppChain to help developers build verifiable, privacy-preserving, and scalable AI applications on any blockchain. Rena aims to give users AI ownership and verifiability, advancing decentralized finance, gaming/social applications, and multimodal LLM applications.

a16z Partner: LLM-type products have the potential to become disruptors in the search business

a16z general partner Andrew Chen asked on social media whether OpenAI's ChatGPT has commercial potential given its impressive scale, discussing questions such as searches per user, revenue per impression, and performance under an advertising-based monetization model. He believes that a product with a user base similar to the Firefox browser could generate annual revenue approaching $1 billion at its peak, and that if ChatGPT remains free to use, its monthly active users could grow significantly. He also noted that large language model (LLM) products have the potential to disrupt the search business.

JPMorgan launches AI tool for analysts internally

BlockBeats news, July 26th: according to Cointelegraph, investment banking giant JPMorgan is rolling out LLM Suite, an internal generative AI product similar to ChatGPT, designed primarily to help analysts with drafting, idea generation, and document summarization. JPMorgan described LLM Suite as a "ChatGPT-like product" that can work alongside other internal systems handling sensitive financial information to improve "general productivity".
Odaily Planet Daily News: Lightning Labs has launched new AI Bitcoin tools designed to let developers seamlessly integrate Bitcoin and the Lightning Network into their AI applications. The suite includes LLM Agent BitcoinTools, which allows developers to create AI agents that can hold Bitcoin balances, send and receive Bitcoin over the Lightning Network, and interact with Lightning Network daemon nodes. The release also includes the Aperture reverse proxy server, which supports Lightning Node Connect and provides dynamic API endpoint pricing. These tools are intended to improve the accessibility and functionality of AI infrastructure. (The Block)
Babbitt News, June 9th: according to a Chosun Ilbo report, Samsung has been confirmed to be developing its own ChatGPT-like AI large language model (LLM) for internal use. Development began in June and is led by Samsung Research. Sources said that nearly all of Samsung's GPU computing resources have been devoted to training the model, and the company plans to complete the first version within two months. Samsung currently plans to use the model for document summarization, software development, and language translation, and has not yet decided whether to offer the product to consumers.
PANews, June 2nd: according to PR Newswire, blockchain analytics company Elliptic announced it has integrated the large language model (LLM) ChatGPT into its off-chain intelligence and research collection work, helping researchers and investigators synthesize and organize intelligence on emerging risk factors more quickly and at greater scale. The company said integrating ChatGPT will allow Elliptic to understand a client's risk exposure precisely and make more informed, secure decisions.

Apple researchers: Mainstream AI models still cannot achieve the expected reasoning level of AGI.

Gate News bot: Apple researchers pointed out in a paper titled "The Illusion of Thinking", published in June, that leading AI models still struggle with reasoning, suggesting the race toward artificial general intelligence (AGI) still has a long way to go. The article noted that the latest updates to mainstream large language models (LLMs), such as OpenAI's ChatGPT and Anthropic's Claude, have added large reasoning models (LRMs), but their basic capabilities, scaling behavior, and limitations "are still not fully understood." Current evaluations focus mainly on established mathematical and coding benchmarks, "emphasizing the accuracy of the final answers." The researchers argue that this kind of evaluation does not probe the models' reasoning capabilities, in sharp contrast to expectations that AGI could be achieved within just a few years.

AI on-chain trading engine Brian announces termination, founder criticizes VC preference for speculative tokens

Brian, the AI intent-recognition project launched at the 2023 ETHPrague hackathon, has announced it is shutting down due to loss of market advantage and financing difficulties, criticizing VCs for focusing on TGEs and hype while neglecting actual progress. The team had planned to launch a Web3-specific model and token, but industry speculation and cost pressures led to the project's failure.

Research firm: total 2024 revenue of the world's top 10 IC design companies to grow 49% year-on-year, with NVIDIA accounting for half

In 2024, the total revenue of the world's top ten IC design companies is expected to reach about $249.8 billion, up 49% year-on-year, with NVIDIA alone growing by as much as 125%. The 2025 outlook suggests that advanced semiconductor manufacturing processes will drive growth in AI computing power, large language models will continue to proliferate, and edge AI devices will become the next wave of semiconductor growth drivers.

The SCIHUB community is developing the scientific AI Agents architecture SCAI, aiming to provide more accurate and reliable tools for researchers.

BlockBeats news, January 13th: according to official sources, the SCIHUB community has developed a new AI agent architecture, SCAI (SCientific AI), aiming to provide more accurate and reliable tools for researchers. SCAI's goal is to address the "hallucination" problem of LLMs when answering scientific questions, enabling AI

Reddit user claims ChatGPT initiates conversations based on historical message inference.

OpenAI announced that it will launch a more intelligent ChatGPT language model with reasoning capabilities and the ability to "think before answering". When asked about a user's first day of school, the model can infer the user's situation from previous conversations, which appears to be one of the newly upgraded features. The new model is expected to respond faster and to be capable of independent, in-depth research.
According to a 36Kr report on June 5th, the AI data company Integer Intelligence recently completed a Pre-A financing round worth tens of millions, with Yishang Capital as the exclusive financial advisor. The funding will mainly be used for the iterative upgrade of its intelligent data engineering platform (the ABAVA Platform). The new ABAVA platform will combine large AI models with small models to achieve efficient automatic data annotation. In addition, Integer Intelligence will integrate RLHF (reinforcement learning from human feedback) data services to provide solutions for the development and iteration of large language models (LLMs).

Gaia core developer Harish Kotra attended the ETHDenver 2025 summit and shared his views on open source LLMs

Gaia is a decentralized web project that aims to provide secure, censorship-resistant, and monetizable AI agent services. Its core developer, Harish Kotra, gave a talk on running open-source LLMs at ETHDenver 2025. GaiaNet is built on edge computing nodes controlled by individuals and businesses, and is designed to pool everyone's knowledge and skills while protecting privacy.

Vitalik: AI prediction market can make X community notes faster

The Ethereum co-founder stated that artificial intelligence and prediction market technology could speed up the production of community notes on social media platforms. Community notes are a feature that lets the community add context to potentially misleading posts. With prediction markets, people can use large AI language models and bots to determine whether a post warrants a community note.

ZhenFund (True Fund) announces a donation to the open-source AI project vLLM

Odaily Planet Daily News: ZhenFund has announced a donation to the open-source AI project vLLM, stating that the donation is intended to promote the popularization of AI technology and benefit more people. vLLM is an open-source large-model inference acceleration framework developed by a three-person team at the University of California, Berkeley; it supports more than 30 of the latest open-source models, including Mistral and Llama.
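
For readers unfamiliar with vLLM, it exposes a small offline-inference API on top of its optimized serving engine. Below is a minimal sketch of typical usage; the model name and prompt are illustrative choices, not details from the report:

    from vllm import LLM, SamplingParams

    # Load an open-source checkpoint; vLLM handles weight loading and
    # KV-cache management behind this single object.
    llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")

    params = SamplingParams(temperature=0.7, max_tokens=128)

    # Prompts are batched and scheduled together, which is where the
    # framework's throughput gains come from.
    outputs = llm.generate(["What does an inference acceleration framework do?"], params)
    for out in outputs:
        print(out.outputs[0].text)
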
Lenovo Group launched its first next-generation Copilot+ PCs, the Lenovo Yoga Slim 7x and Lenovo ThinkPad T14s Gen 6, which are equipped with the latest Qualcomm Snapdragon X Elite processor. According to reports, these two AI PC devices, powered by the new Snapdragon X Elite, feature a 12-core Oryon CPU, an Adreno GPU, and a dedicated Hexagon NPU. With Microsoft's Copilot+ enhancements, users can run large language models (LLMs) even when offline.
According to Interface News on June 27th, database company MongoDB announced a partnership with Google Cloud to accelerate developers' use of generative AI and the development of new applications. Developers can use MongoDB Atlas and its integration with Google Cloud Vertex AI large language models (LLMs) to speed up software development.
IT House published an article today arguing that blindly piling on parameters does not necessarily make an AI model better; model quality depends more on the quality of the training data. Microsoft recently released phi-1, a language model with 1.3 billion parameters trained on a "textbook-level" high-quality dataset, and reportedly "its actual performance is better than GPT 3.5 with 100 billion parameters". The model is based on the Transformer architecture; the Microsoft team used "textbook-grade" data from the web plus "logically rigorous content" processed with GPT-3.5, and eight Nvidia A100 GPUs, completing training in just 4 days. The team argues that improving the quality of a model's training dataset may enhance accuracy and efficiency more than simply increasing the parameter count, which is why they trained phi-1 on high-quality data. In testing, phi-1 scored 50.6%, better than the 175-billion-parameter GPT-3.5 (47%). Microsoft also said phi-1 will be open-sourced on Hugging Face. This is not the first time Microsoft has built a small LLM: it previously created the 13-billion-parameter Orca, trained on GPT-4 synthetic data, whose performance also exceeded ChatGPT's.
According to the Securities Times, Kingsoft Office held its 2022 annual results briefing on June 20th. On the development of AI-related products, company chairman Zou Tao said AI is one of Kingsoft Office's key product strategies. At present the company is "walking on two legs": it will first cooperate with technical service providers in AIGC and LLM, developing products through their technical enablement, while also continuing to engage with promising start-ups. In addition, WPS AI features will be rolled out gradually and are expected to empower the company's entire product line.
According to Businesskorea, citing South Korean sources, the Device Solutions (DS) division in charge of Samsung Electronics' semiconductor business is developing its own large language model (LLM) at the level of GPT-3.5 or above, a customized AI intended to improve productivity and work efficiency. The service is reportedly expected to launch in a basic form as early as December this year, with professional search covering company knowledge and data to follow in February next year.

The social blockchain Cyber will launch the crypto AI model Cyber.AI

The social blockchain Cyber will launch Cyber.AI, a crypto AI model that analyzes millions of X posts and accounts, combined with crypto data from over 200,000 projects, to better support intelligent AI agents. Cyber.AI is also collaborating with @elizaOSai, giving developers access to a more specialized crypto knowledge base. The team is building a crypto assistant product and a crypto-native base model on top of it to provide more accurate crypto queries and information.

The Allora development team has implemented intelligent trading management using a DeepSeek AI agent

Jinse Caijing reported that the Allora development team has implemented LLM-based trading decisions using DeepSeek as an AI agent, with Allora Network serving as the interaction platform on Hyperliquid.

An address bought $63,000 worth of LLM early, and the position's value has risen to $1.3 million.

BlockBeats news, January 9th: according to Arkham monitoring, an address starting with DxjmH bought $63,000 worth of LLM within 3 hours of the token's listing. The position's value has since risen to $130,000.

A trader buys 2,300 LLM and earns over $500,000.

Odaily Planet Daily News: According to Onchain Lens monitoring, a trader bought 2,300 LLM and made a profit of over 50

A trader made a 1,887x profit within 4 hours by buying LLM early

BlockBeats news, January 8th: according to Onchain Lens monitoring, a trader earned $500,000 in 4 hours by buying LLM early. The address had purchased 23 million LLM on PumpFun for 1.37 SOL (about $269). After the price surged sharply, the address closed its entire LLM position for 2,594 SOL (about $507,000), a return of roughly 1,887x.
On June 12th, Golden Ten Data reported that, according to a survey by global market research firm TrendForce, the first quarter was a traditional off-season for consumer electronics. Although there were occasional rush orders in the supply chain, most were one-off inventory replenishment by individual customers, so momentum was relatively weak. Meanwhile, demand from automotive and industrial-control applications was dampened by inflation, geopolitical conflict, and energy costs. Only AI servers, driven by heavy investment from global CSP giants and enterprise deployment of large language models (LLMs), emerged as the sole bright spot supporting the supply chain in the first quarter. Against this backdrop, the combined first-quarter output value of the world's top ten wafer foundries fell 4.3% to $29.2 billion.
According to a Financial Associated Press report on June 14th, Bank of America Securities issued a report noting that AMD recently announced its latest AI GPU chip, the MI300X, and predicts that the AI chip market will grow from $30 billion this year to $150 billion in 2027. However, the bank remains "neutral" on AMD because the company did not name any commercial customers for its new GPU products. AMD still trails Nvidia significantly, as it will not begin sampling the AI chips until next quarter. In terms of timing, by the time AMD's 5nm-based GPU accelerators ramp up, Nvidia may launch its next-generation 3nm-based, LLM-optimized Blackwell-architecture GPUs, further extending its already sizable lead in the AI market.

An address sold 24.25 million LLM early, missing out on a profit of $3 million.

According to Onchain Lens monitoring, an address purchased 24.25 million LLM for 9.17 SOL yesterday but sold the tokens at a loss just 2 minutes later for 8.27 SOL. The value of those tokens has now exceeded 300.

LLM market cap exceeded $140 million, up 43.49% in 1 hour

Odaily Star Daily News: GMGN data shows LLM has risen above 0.14 USDT and is now trading at 0.1403 USDT, up 43.49% in the past hour, with a market cap of roughly $141 million. Odaily reminds users that meme

LLM's top 5 whale positions now show a floating profit of $670,000

According to on-chain analyst Ai Yi, the top 5 LLM-holding whales bought 9.49 million LLM at a low of $0.01949 last night and currently have a floating profit of $670,000 (on a cost basis of only $185,000, a return

A trader cut losses on 10.63 million LLM, missing out on a potential profit of $1.5 million

BlockBeats news, January 9th: according to Lookonchain monitoring, a trader sold 10.63 million LLM for $1,725; those tokens are now worth over $1.5 million. Yesterday the trader spent 14 SOL ($2,767) to buy the 10.63 million LLM, but shortly after the purchase the price began to drop, so he sold the entire position for 8.74 SOL ($1,725).

Data: LLM market cap exceeds $100 million

ChainCatcher news: according to GMGN data, LLM has broken above $0.1 and is now trading at $0.125, up 30% in the past hour, with its market cap currently at 1.26

A smart-money investor bought 23.76 million LLM 19 hours ago and is currently sitting on a 289x floating profit.

Odaily Planet News: According to The Data Nerd monitoring, 19 hours ago smart-money address FdvYZ spent 23.76 SOL (about $4,700) to buy 23.76 million LLM, now worth $1.36 million, an ROI of 289x. The address has since transferred 15 million LLM to a wallet.

AI meme coin LLM's market cap briefly exceeded $100 million, with 24-hour trading volume over $3.3 billion

BlockBeats news, January 9th: according to GMGN market data, 22 hours after its issuance the AI meme coin LLM's market cap briefly exceeded $100 million and is now at $99.8 million, with 24-hour trading volume exceeding 3.3

Crypto and AI company Fraction AI completed a $6 million pre-seed financing round, co-led by Spartan Group and Symbolic Capital

PANews, December 19th: according to The Block, Fraction AI, a startup focused on data labeling, crypto, and artificial intelligence, has completed a $6 million pre-seed financing round. Spartan Group and Symbolic Capital co-led the round, with other investors including Borderless Capital, Anagram, Foresight Ventures, and Karatage. Angel investors such as Polygon co-founder Sandeep Nailwal and NEAR Protocol co-founder Illia Polosukhin also participated and are serving as advisors. Fraction

Solana ecosystem AI project CAI | CharacterX announces the launch of enterprise-level AI infrastructure CAI

CAI | CharacterX, an AI project in the Solana ecosystem, will launch an enterprise-level AI infrastructure solution called CAI. The COO stated that the project has reached preliminary cooperation agreements with multiple enterprises, with monthly revenue of $500,000. The project has received investment from the Stanford Blockchain Accelerator and a seed round of several million dollars.

Anthropic CEO: AI development has not slowed down

Odaily Planet Daily News: According to Anthropic CEO Dario Amodei, the pace of improvement in large language models is not expected to slow, despite recent reports that the upcoming LLMs from Anthropic, Google, and OpenAI show only modest gains over earlier models. Earlier reports this month said the performance of OpenAI's latest flagship model, Orion, has shown little improvement. (The

Aptos Foundation collaborates with AI companies to build Move programming language tools.

The Aptos Foundation is collaborating with FLock.io to launch an artificial intelligence tool that makes coding in Move, the blockchain's custom programming language, easier. FLock.io has developed a large language model specifically tailored for Move on the Aptos network. Preliminary tests indicate that FLock.io's LLM performs well at generating Move-specific code, with significant improvements in code accuracy and readability.

Arweave AO deposits exceeded $158 million in the past week

Jinse Finance reported that AO, an AI application computing protocol built on Arweave, attracted over $158 million in deposits within a week of its fair launch. AO is a new protocol built on the Arweave permanent data storage layer that enables parallel application execution through a decentralized computing model. It also allows AI applications to run on-chain, including large language models (LLMs) running inside smart contracts.
PANews, July 7th: according to The Block, Lightning Labs has launched new artificial intelligence Bitcoin tools designed to help developers seamlessly integrate Bitcoin and the Lightning Network into their AI applications. The suite includes LLM Agent BitcoinTools, which allows developers to create AI agents that can hold Bitcoin balances, send and receive Bitcoin over the Lightning Network, and interact with Lightning Network Daemon nodes. Additionally, this release includes the Aperture reverse proxy server, which supports Lightning Node Connect and offers dynamic API endpoint pricing. These tools are intended to improve the accessibility and functionality of AI infrastructure.
Odaily Planet Daily News: Bitcoin mining company Hive Blockchain said in an earnings call with analysts on Friday that it will let customers train large language AI models in its data centers, claiming better privacy than competitors such as OpenAI's ChatGPT. Aydin Kilic, the company's CEO and president, said: "Companies are now noticing that they don't want to upload sensitive customer data to companies like OpenAI that have a public LLM (large language model). We want to provide privacy in Hive through Hive Cloud. Companies can have a service agreement, own their data and their privacy, and still run AI compute workloads on our GPUs." Shares of Hive rose nearly 2 percent on Nasdaq on Friday. (CoinDesk)
According to an IT House report on June 25th, developer Iván Martínez Toro recently released PrivateGPT, an open-source tool that lets users ask questions about their own documents without an Internet connection. PrivateGPT can run locally on home devices; before running it, users need to download the open-source large language model gpt4all. Users then place all of their relevant files into a directory so the tool can pull in the data. Once the documents have been ingested, the user can ask the model any question, and it will answer using the provided documents as context. PrivateGPT can ingest more than 58,000 words and currently requires substantial local computing resources (a high-end CPU is recommended) to set up. Toro said PrivateGPT is currently at the proof-of-concept (PoC) stage, which at least proves that a ChatGPT-like large model can be run entirely locally, and that the PoC could foreseeably be turned into a real product, giving companies access to a personalized, secure, and private ChatGPT to increase productivity.
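
As a rough illustration of the local document question-answering workflow described above, here is a minimal sketch using the gpt4all Python bindings. This is not PrivateGPT's actual code; the model file name, directory name, and the naive concatenate-everything "ingestion" step are placeholder assumptions for illustration only:

    from pathlib import Path
    from gpt4all import GPT4All

    DOCS_DIR = Path("my_documents")  # directory the user fills with .txt files

    # Load a locally stored open-source model; after the one-time download,
    # generation runs fully offline.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

    # Naive "ingestion": concatenate the documents to use as context.
    corpus = "\n\n".join(p.read_text(errors="ignore") for p in DOCS_DIR.glob("*.txt"))

    question = "What deadlines are mentioned in these documents?"
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{corpus[:4000]}\n\n"
        f"Question: {question}"
    )
    print(model.generate(prompt, max_tokens=256))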