On one side, we have OpenAI, Microsoft and Google from the United States (US) flexing their muscles in the Large Language Model (LLM) space. On the other side, China's DeepSeek has taken the world of LLMs by storm. And then we have Anthropic dropping a bombshell with its Claude 3.7 model, which can perform step-by-step reasoning through problems. The tussle in the AI realm is growing so fierce that the US and China appear locked in a cut-throat race to lead the AI landscape.
In this relentless pursuit of innovation, ethics and guardrails seem to be taking a backseat. A series of discussions on the US's technological hegemony has commenced, with experts arguing for or against the possibility of continued American dominance in AI. Some believe that Chinese LLMs may dethrone the supremacy of American companies and propel China to leadership in AI. Following Donald Trump's return to power, conversation in tech circles has also shifted towards outpacing China at all costs, even at the cost of abandoning responsible oversight of AI.
Ideally, AI development would proceed within the boundaries of ethical guardrails and policy guidelines that ensure innovation serves humankind. However, with geopolitics meddling in innovation, the focus appears to be shifting away from how models are trained and whether their parameters reflect societal values. If AI is developed solely for political ends and weaponized in the race for technological supremacy, we may see models that partially or entirely disregard the cultural perspectives and diverse identities they are meant to represent, advancing only to strengthen national dominance. This may open the door to culturally specific harms and erode the safeguards that keep AI in the service of humans.
Another concern is that AI models withhold information deemed controversial in their country of origin. Where governments can opaquely control exposure to sensitive political matters and politically contentious topics, AI models may end up representing the views of only the powerful and the state actors that control them. People once looked to government as the upholder of justice and the overseer of how AI is used for and by society. Yet, with the governments of the US and China seemingly weaponizing AI, civil society may be left disillusioned by the lack of AI governance and the unbridled advancement of AI models, which may harm society in the long run. And not to forget, the leading behemoths of the AI industry are claiming that Artificial General Intelligence (AGI) is coming soon.
As nations continue to leverage AI as a weapon, a deterrent and a display of technological prowess, AI ethics may well take a backseat, while profits and ambitions reign supreme.