Weapons systems with an increasing number of autonomous features are emerging as revolutionary technologies of war. This concerns, in particular, systems with autonomy in their critical functions, that is, the ability to select and engage targets without human input. This weaponisation of Artificial Intelligence (AI) signals the looming absence of meaningful human control in warfare, which has become a central focus of the debate on autonomous weapons systems (AWS). Here, states seek either to introduce new norms governing AWS or to leave the field open in order to increase their room for manoeuvre. These uncertainties make monitoring the extent to which AWS will shape and transform international norms governing the use of force a matter of great importance. But existing International Relations research on norms, despite producing excellent critical work, does not yet enable us to understand the dynamics of this vital process because it does not adequately capture how norms emerge and develop. The state of the art conceptually connects norms predominantly to international law and limits attention to how norms emerge in deliberative international forums. Instead, the AUTONORMS project will develop a new, ground-breaking theoretical approach that allows us to study how norms, understood as standards of appropriateness, manifest and change in practices. Taking this bottom-up perspective, we will monitor norm emergence and change across four contexts of practices (military, transnational political, dual-use, and popular imagination) in four countries (the USA, China, Japan, and Russia). This flexible approach allows us to adequately understand how norms related to AWS will develop and to consider the impact such emerging norms have on the current international security order, of which norms are constitutive building blocks. The project thus provides an innovative analytical model for studying uncertain processes of technological innovation associated with the AI revolution.