A researcher employed by ByteDance, the Chinese parent company of TikTok, was mistakenly added to a group chat for American artificial intelligence (AI) safety experts last week, the US National Institute of Standards and Technology (NIST) revealed Monday.
ByteDance Researcher Finds Their Way Into US AI Safety Experts' Group Chat
A person familiar with the matter told Reuters that the researcher was added to a Slack instance used for discussions among members of NIST's US Artificial Intelligence Safety Institute Consortium.
NIST told Reuters that a member of the consortium had added the researcher as a volunteer. Once the individual was identified as a ByteDance employee, the California-based researcher was promptly removed from the group "for violating the consortium's code of conduct on misrepresentation."
TikTok Under Scrutiny in the US
The presence of a ByteDance-affiliated researcher reportedly raised concerns within the consortium, given that ByteDance is not a member of the group.
The incident drew attention because TikTok is at the center of a national debate over whether the app's access to large amounts of US user data gives the Chinese government an opportunity to spy on Americans. The AI Safety Institute, which evaluates the risks of cutting-edge AI programs, was established under NIST.
According to Reuters, the consortium's founding members include hundreds of major US tech companies, AI startups, nongovernmental organizations, universities, and other groups that work to develop guidelines for the safe deployment of AI applications and to help AI researchers find and fix security vulnerabilities in their models.
The Slack instance for the consortium reportedly includes some 850 users.