Paulo Shakarian is an Assistant Professor at Arizona State University, where he directs the Cyber-Socio Intelligent Systems (CySIS) Lab. He is a recipient of the AFOSR Young Investigator Award, the DoD Minerva Award (as a co-PI), and the DARPA Service Chiefs’ Fellowship. His work has been featured in media outlets including The Economist and Popular Science. He has also authored several books, including Elsevier’s “Introduction to Cyber-Warfare” and Springer’s forthcoming “Diffusion in Social Networks.” For the latest news on Dr. Shakarian, visit: http://shakarian.net/paulo
When people think of anti-terrorism, many would think of guns and bombs, but few would think of Artificial Intelligence. How did you come to connect these two things in the first place? Did it have something to do with your personal experience? Were there any aha moments?
Shakarian: When I was in the Army, I spent over two years in Iraq primarily working on analyzing terrorist and insurgent activities. I was there in 2003–2004 and again in 2006–2007. In terms of computer usage, Iraq was very different from all previous wars that the United States was involved in. I think it was the first conflict where computer systems were regularly used by the soldiers who would go on patrol. So, this resulted in massive amounts of data being collected, which really was not the case prior to the conflict. In 2006 I was on a small team that worked closely with a special Iraqi paramilitary unit that traveled to different parts of the country. In doing so, I saw how many different military units analyzed all of this data being collected in a variety of ways. I had previously earned my undergraduate degree in computer science, so it occurred to me that much of this analysis could be improved and made more efficient with advanced algorithms. After Iraq I was selected as a DARPA Fellow and spent a few months in Virginia learning what they were doing with a variety of advanced technologies. So, after those experiences, I started graduate school in 2008 to earn my doctorate. I had a few ideas of how artificial intelligence could improve our anti-terrorism practices, and graduate school gave me the opportunity to explore these ideas in my research. I continued this research after I earned my Ph.D. while I was faculty at West Point, and now, having left the military, as faculty here at Arizona State.
At present, what applications of Artificial Intelligence are being used in anti-terrorism? Compared to traditional methods, what advantages do they have?
Shakarian: There are many ways that AI can help counter-terrorism, as there are many problems in this area that are particularly challenging for humans. For instance, finding temporal patterns in the behavior of a terrorist group (as we did in our recent data-driven study on the Islamic State) helps us understand what leads to certain actions. Understanding how information spreads on social media is also very important, as we can learn which messages of extremist groups resonate with the population (see our recent paper on predicting viral cascades in social media: http://arxiv.org/abs/1508.03371). Understanding terrorist networks and their structure in order to develop better targeting strategies is also something that has received a lot of attention lately, and I have worked on this topic previously (http://arxiv.org/abs/1211.0709). The key advantage of artificial intelligence is that we can model a terrorist, criminal, or extremist in a way that is more sophisticated and efficient than a human can manage on paper. For instance, I worked with law-enforcement officers who would literally spend weeks drawing a social network of a few dozen criminals to understand their organizational structure; we have software that can handle social networks of millions of people and can do so in a matter of minutes.
Could you please briefly introduce your research to Chinese readers? What artificial intelligence technologies do you use in your research?
Shakarian: So, I am the director of a research group called the “Cyber-Socio Intelligent Systems (CySIS) Laboratory” at Arizona State University. We use a variety of artificial intelligence techniques including data mining, social network analysis, machine learning, and logic to create software systems that address problems relevant to security applications and related areas. Our work has been used to support military, law enforcement, and cyber security applications. Our focus is on understanding problems related to security that can be addressed by advancing the science of artificial intelligence. The idea is based on “use-inspired” research, where the research is heavily influenced by how it is applied. So, for instance, in our work with the Chicago Police, we learned a great deal about how the police deal with gang violence and what their significant challenges are. This has directly influenced our research directions in the lab, as it exposed us to more challenging pursuits directly related to how the work gets applied in real life.
It is impressive that in your paper “Mining for Causal Relationships: A Data-Driven Study of the Islamic State,” you used a dataset consisting of 2200 incidents and uncovered some interesting relationships between seemingly unrelated behaviors. How did you use artificial intelligence in the study? What do you plan to do next?
Shakarian: The main intuition in the “Data-Driven Study of the Islamic State” paper was that we went beyond correlation and examined temporal relationships based on causality. Why do we need to go beyond correlation? Well, if you have a large amount of information (“big data”), you run the risk of learning correlations that are purely coincidental. For instance, the divorce rate in the U.S. state of Maine correlates with the per capita consumption of margarine (a substitute for butter) (see here). However, these two things clearly have nothing to do with each other. But if we didn’t already know that it makes no sense to believe margarine leads to divorce, how would we know? One technique to get around this issue is to find other potential causes and compare them. In the divorce-margarine example, we could first hypothesize that margarine usage leads to divorce, and then test that hypothesis against other variables (say, per capita income) that also seem to correlate. This is what we did in our analysis of ISIS to become more confident in the relationships that we learned.
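The comparison Shakarian describes, screening a hypothesized cause against competing explanations rather than trusting raw correlation, can be sketched in a few lines. This is an illustrative toy, not the causal-inference framework the actual paper uses; the function names and the lag-1 binary-indicator setup are assumptions made for the example.

```python
def conditional_lift(cause, effect, lag=1):
    """How much more likely the effect is after the candidate cause fires.

    `cause` and `effect` are equal-length lists of 0/1 daily indicators.
    Returns P(effect at t+lag | cause at t) minus the effect's base rate;
    a coincidental correlate should show little or no lift.
    """
    hits = [effect[t + lag] for t in range(len(cause) - lag) if cause[t]]
    if not hits:
        return 0.0
    base_rate = sum(effect[lag:]) / len(effect[lag:])
    return sum(hits) / len(hits) - base_rate

def screen_causes(candidates, effect, lag=1):
    """Rank candidate causes by lift, strongest first. A real analysis
    would also condition each candidate on the others, as described above."""
    return sorted(((name, conditional_lift(series, effect, lag))
                   for name, series in candidates.items()),
                  key=lambda pair: -pair[1])

# Candidate 'A' genuinely precedes the effect; 'B' is anti-aligned with it.
cause_a = [1, 0, 1, 0, 1, 0, 1, 0]
cause_b = [0, 1, 0, 1, 0, 1, 0, 1]
effect  = [0, 1, 0, 1, 0, 1, 0, 1]
print(screen_causes({'A': cause_a, 'B': cause_b}, effect))  # 'A' ranks first
```

Comparing the lift of each hypothesized cause, rather than each one's correlation in isolation, is the step that guards against margarine-and-divorce coincidences.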
Machine learning seems to need large amounts of data for training. In the fight against terrorism, how can more useful data be collected?
Shakarian: In many machine learning approaches, you need a lot of data to learn a mathematical model of the entities (i.e. terror groups) that you wish to analyze. However, we look to use a variety of methods to squeeze the most out of our data. For instance, we used geographic social network analysis to identify socially tight-knit but geographically dispersed groups within a terrorist organization (see here). Sometimes, when analyzing terror groups, it is useful to understand aspects of the data you have collected, even when it is not a great deal of information.
Your research has spawned several potentially game-changing programs, like SCARE, ORCA, GANG and SNAKE. Could you please introduce the purposes and features of one or two of them and how they are doing right now? Can they predict people’s moves and prevent crime?
Shakarian: So, SCARE was software designed to help identify weapons caches for insurgent groups (see here). It was designed to hinder insurgents planning roadside bomb attacks by giving military personnel insight into where such weapons could potentially be hidden. Currently, we are leveraging this and related technologies to help law enforcement agencies locate missing persons, so we hope to have results on this exciting project in the next few months!
ORCA/GANG were software packages used by the Chicago Police for analyzing criminal street gangs. The successor to these efforts is VPRED — violence prediction engine — designed to use information about social structure of the street gangs to predict violent offenders. VPRED was presented this summer at KDD — the top conference on data mining. You can read more at this link.
SNAKE was social media monitoring software also used by law enforcement. It was more experimental in nature, but the lessons we learned in developing it led to ideas that we are now looking to use in our effort to apply social network / social media analysis to extremist groups. We recently won the prestigious DoD Minerva award to fund this (see here). Specifically, we are now looking to detect when something “goes viral” on social media well in advance of when it actually does, and we’ve made some good progress on this (see here and here).
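One well-known early signal of virality that work in this area builds on is structural diversity: a cascade whose early adopters come from many disconnected parts of the social graph tends to spread further than one confined to a single community. The sketch below computes that signal with a plain adjacency dictionary; it is a simplified illustration, not the feature set of the cited papers, and the function name is an assumption for the example.

```python
def structural_diversity(adj, early_adopters):
    """Count the connected components that the early adopters form among
    themselves in the social graph `adj` (node -> set of neighbor nodes).
    More components = exposure arriving from more distinct communities,
    a classic predictor that a cascade will go viral."""
    remaining = set(early_adopters)
    components = 0
    while remaining:
        stack = [remaining.pop()]  # start a new component
        components += 1
        while stack:
            node = stack.pop()
            for nbr in adj.get(node, ()):
                if nbr in remaining:   # only edges among adopters count
                    remaining.remove(nbr)
                    stack.append(nbr)
    return components

# Two friend pairs; 'd' has not adopted, so 'c' stands alone.
adj = {'a': {'b'}, 'b': {'a'}, 'c': {'d'}, 'd': {'c'}}
print(structural_diversity(adj, ['a', 'b', 'c']))  # 2
```

A monitoring pipeline would compute features like this over the first few minutes of a cascade and feed them to a classifier that predicts whether the cascade will go viral.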
Shaping terror networks was a technique I worked on with a few others at West Point. The idea came from a key problem with targeting the leaders of terror groups: these organizations are inherently resilient, in that they can often produce new charismatic leaders in a short amount of time. To account for this, the theory we developed (work I did at West Point with Devon Callahan, Jeff Nielsen, and Tony Johnson), called “shaping,” was designed to alter the terrorist network’s structure to make the organization less resilient to a leadership strike. Under this theory, if you “shape” the terror network first, by targeting select individuals, you make the organization more dependent on a limited number of leaders. So, after “shaping,” if you then move to eliminate the leaders, you would potentially reduce the effectiveness of the terrorist network. This work gained much interest within the U.S. government, though I do not know how it was ultimately employed, as ideas like this get evaluated in a highly classified manner.
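The two-phase logic of shaping, remove the redundant successors first so that a later leadership strike actually fragments the organization, can be demonstrated on a toy network. This is a simplified illustration of the idea, not the published algorithm; the graph and the fragmentation measure (size of the largest surviving connected component) are assumptions made for the example.

```python
from collections import deque

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component of the graph `adj`
    (node -> set of neighbors) after deleting the nodes in `removed`.
    A small value after a strike means the network has fragmented."""
    seen = set(removed)
    best = 0
    for start in adj:
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        size = 0
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

# Toy network: leader 'L' commands cells 'A' and 'B', which are also
# bridged by a potential successor 'R' (the source of resilience).
adj = {'L': {'A', 'B'}, 'A': {'L', 'R'}, 'B': {'L', 'R'}, 'R': {'A', 'B'}}

# Striking the leader alone: the cells stay connected through 'R'.
print(largest_component(adj, removed={'L'}))       # 3
# Shaping first (removing 'R'), then striking the leader, fragments the group.
print(largest_component(adj, removed={'R', 'L'}))  # 1
```

The shaping step buys nothing by itself; its value shows up only in how much more damage the subsequent leadership strike does.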
With the recent development of the mobile internet and the Internet of Things, more and more data can be collected, causing an explosion of data. What changes has Big Data brought to anti-terrorism and anti-crime? How can it be used? How can privacy be protected?
Shakarian: This is an interesting question, and something that I think about quite often. The challenges brought by technologies such as social media and the Internet of Things are significant from both an anti-terror/anti-crime and a privacy-preserving perspective. From the anti-terror/anti-crime aspect, the new technologies can lead to more data potentially being collected, but this data is not useful without proper analysis and methods for handling large data sources. Further, technologies such as social media provide criminals, extremists, and others an entirely new way to conduct themselves, and it is challenging to find important information in a timely manner. The protection of privacy is also an important aspect of this, as criminals, terrorists, malicious hackers, and the like will use the openness of computer platforms to steal people’s identities to raise money, hold computers for ransom, and blackmail people. I believe a lot of people do not understand the risks in using many technologies that make much of their information more open than they realize, leaving them susceptible to such attackers. It is important to work to protect one’s privacy against such criminals and extremists, and that really starts with better technology education. I think that in the future, we may see terrorist attacks where large numbers of people’s personal information is exposed in a manner that causes significant economic damage, so educating the population to take measures to protect their personal information is very important.
Your work can be used to predict ISIS’s future moves and prevent tragedy from happening to victims. Do you have any plans to take a step further and prevent people from turning into terrorists even before they join ISIS?
Shakarian: Yes, we are very much looking into how to prevent people from joining ISIS, as this is highly important. This is a long-term goal of our Minerva effort (see here). I am also working on a new theory of “inhibiting” the diffusion of a message in social media, and was awarded a grant by the US Air Force to study this (see here). The idea is to understand what “inhibits” a viral meme in social media, so that hopefully we can later engineer ways to stop the spread of extremist propaganda, including recruitment efforts.
What do you think future terrorism and anti-terrorism will look like? What role will AI play in it?
Shakarian: I think the biggest area that will become important in terrorism in the future will be in cyberspace. Some terror groups have adopted hacking techniques already, but I think this is only the beginning. The potential for damage is large, and so this will become an attractive area for terror organizations in the future.
Besides AI, what new technologies do you think can be used in anti-terrorism in the future? For example, Virtual Reality, Augmented Reality, drones, gene technology, neuroscience…
Shakarian: Going along with the above answer, I think cyber-security will play a large role in the future.
Elon Musk and Stephen Hawking think AI is dangerous to us, even more dangerous than nuclear weapons, with the potential to destroy humankind. What do you think about that?
Shakarian: Artificial intelligence, like many other technologies, has the potential to be used for ill purposes and have an adverse effect on humanity, but I don’t see how it has more potential to be harmful than research from a dozen other disciplines, such as physics, biology, etc. I think AI has been labelled as “dangerous” by some because there have been recent advances that may have been unexpected to some people, especially those outside of the field. That said, such advances should always make us think about the long-term implications of science and technology, but I don’t see this as something unique to artificial intelligence.
Einstein once said that peace cannot be kept by force; it can only be achieved by understanding. What do you think is the most important thing for peace?
Shakarian: I think Einstein is right, and I think understanding populations across the world — their culture, their needs — will ultimately lead to a more peaceful world. The good news is that technology can be a key enabler of such understanding, allowing us to gain a better perspective on others.