At Swarmcheck, we are passionate about thoughtful and ethical technology.
Take a look at the approach we take to designing our solutions.
Explainable AI
The most popular AI solutions act as so-called “black boxes.” This means that no one, including the developers of the technology, knows why the system made the decision it did. In other words, we don't know whether the system decided on the basis of correct premises, or whether it simply happens to decide that way on Mondays.
In contrast, an expert AI system based on argumentation maps is completely transparent. To see how a decision was reached, search for the conclusion of interest and follow the chain of supporting and undermining premises, together with their justifications and sources.
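As a rough illustration of this traceability, here is a minimal sketch of such a lookup, assuming a simple in-memory graph. The names (`Claim`, `Premise`, `explain`) and the loan example are invented for illustration and are not Swarmcheck's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Premise:
    text: str
    source: str | None = None   # citation or evidence backing the premise
    supports: bool = True       # True = supporting, False = undermining

@dataclass
class Claim:
    text: str
    premises: list[Premise] = field(default_factory=list)

def explain(claim: Claim) -> None:
    """Walk the map from a conclusion down to its premises and sources."""
    print(f"Conclusion: {claim.text}")
    for p in claim.premises:
        relation = "supported by" if p.supports else "undermined by"
        source = f" [source: {p.source}]" if p.source else ""
        print(f"  {relation}: {p.text}{source}")

decision = Claim(
    "Approve the loan application",
    premises=[
        Premise("Applicant's income covers the repayments",
                source="payroll records"),
        Premise("Credit history shows two missed payments",
                source="credit bureau report", supports=False),
    ],
)
explain(decision)
```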
Human in the loop
Human-in-the-loop is an approach in which humans are actively involved in the development and operation of artificial intelligence systems, providing continuous support, supervision and feedback.
In the Swarmcheck system, decisions are made on the basis of an interactive argument map, so any user can improve a decision by adding further arguments. Because arguments are reused across maps, a single relevant argument can improve multiple decisions at once. And because users can respond to one another's arguments, oversight of decisions is exercised by collective intelligence.
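The reuse mechanism can be pictured in a few lines. The sketch below is hypothetical and only illustrates the idea of shared argument nodes, not Swarmcheck's actual implementation.

```python
class Argument:
    """A single argument node; the same object can back many decisions."""
    def __init__(self, text: str):
        self.text = text
        self.responses: list["Argument"] = []

class Decision:
    def __init__(self, question: str):
        self.question = question
        self.arguments: list[Argument] = []

# One shared argument registered in two independent decision maps.
shared = Argument("Processing this data requires explicit user consent")
targeting = Decision("May we enable behavioural ad targeting?")
churn = Decision("Can the churn model use browsing history?")
targeting.arguments.append(shared)
churn.arguments.append(shared)

# A user refines the shared argument once; because both maps hold a
# reference to the same node, both decisions see the improvement.
shared.responses.append(Argument("Consent may be withdrawn at any time"))
assert targeting.arguments[0].responses == churn.arguments[0].responses
```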
In the next phases of development, we also plan to introduce a decentralized moderation system in which randomly selected users resolve conflicts, with the system combining their arguments into a consensus.
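Since this moderation system is still in planning, the following is only a speculative sketch of the random-selection step; the function name and parameters are hypothetical.

```python
import random

def select_moderators(users: list[str], jury_size: int = 5,
                      seed: int | None = None) -> list[str]:
    """Draw a random jury of users to review a disputed argument."""
    rng = random.Random(seed)
    return rng.sample(users, k=min(jury_size, len(users)))

jury = select_moderators(["ala", "ben", "cho", "dee", "eve", "fay"],
                         jury_size=3)
print("Moderators for this dispute:", jury)
```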
Enhanced AI Reasoning
Argumentation mapping enhances the critical thinking skills of those who use the method. Language models that use argumentation mapping likewise show signs of improved inference; our approach to automating legal reasoning is a case in point. The collaboration of an expert argumentation system with large language models (LLMs) is a way to combine the potential of ever-evolving artificial intelligence with the collective intelligence of humans.
A key benefit of combining LLMs with computer-based argumentation modeling is a significant reduction in hallucinations. The LLM helps formulate arguments, summarize conclusions from the argumentation map (a graph), and even participate in collective argumentation, but every step of the reasoning is recorded in the interactive map itself. This makes the resulting decisions logical, verifiable and trustworthy.
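A minimal sketch of this division of labour, under our own simplifying assumptions: the LLM is represented by a stand-in callable, its output enters the map only as an explicit node, and any summary is built strictly from nodes on the map. All names here are illustrative.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    text: str
    supports: list["Node"] = field(default_factory=list)

def add_llm_argument(conclusion: Node, draft: Callable[[str], str]) -> None:
    """The LLM only drafts a candidate premise; it enters the map as an
    explicit, reviewable node instead of hidden chain-of-thought."""
    conclusion.supports.append(Node(draft(conclusion.text)))

def summarize(conclusion: Node) -> str:
    """Summaries are grounded strictly in nodes present on the map."""
    premises = "; ".join(n.text for n in conclusion.supports)
    return f"{conclusion.text}, because: {premises}"

# A stand-in for a real model call; the architecture, not the model,
# is what this sketch illustrates.
fake_llm = lambda claim: f"Case law cited by experts supports '{claim}'"

root = Node("The contract clause is enforceable")
add_llm_argument(root, fake_llm)
print(summarize(root))
```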
AI Alignment
AI and human value alignment (AI Alignment) is a desirable state in which the goals and actions of artificial intelligence coincide with those of humanity. The premise behind Swarmcheck's approach to AI alignment is to scale a phenomenon of value alignment that we have repeatedly observed among humans: the collaborative mapping of arguments. Such a process reveals the values, ideas and criticisms of all group members, including minority voices. The final map is not the decision of any single person, but the result of the structured cooperation of the entire group.
By analogy, we want artificial intelligence, even if it achieves capabilities superior to ours, to take into account the opinions of individuals and the results of our cooperation. In our view, embedding AI in human collective intelligence in this way is the most ethical and safest approach to its development.
Value-based reasoning is also an important element of the data that argument maps contain. Decisions in argumentation maps are justified not only by ideas for their implementation, but also by the values they are meant to serve. Consequently, decisions are grounded in joint discussions of values and in-depth collective reflection on them. Such data is extremely valuable for training new language models to respect well-considered human values in their own decision-making processes.
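To illustrate what such value-grounded data might look like, here is a hypothetical record schema; the field names and the example content are invented for illustration, not a description of our training pipeline.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DecisionRecord:
    decision: str
    implementation_ideas: list[str] = field(default_factory=list)
    value_justifications: dict[str, str] = field(default_factory=dict)

record = DecisionRecord(
    decision="Publish the audit results of our model",
    implementation_ideas=["Release a redacted report on the company blog"],
    value_justifications={
        "transparency": "users can verify claims about the model's behaviour",
        "safety": "independent scrutiny surfaces failure modes earlier",
    },
)

# Serialized records of this shape could, in principle, serve as
# value-grounded training examples for new language models.
print(json.dumps(asdict(record), indent=2))
```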