Automotive Cybersecurity – An Interview with Professor Siraj Ahmed Shaikh

Reported by Kate McNeil, PaCCS Communications Officer

For the final post in our October Cyber Series, PaCCS Communications Officer Kate McNeil sat down with Professor Siraj Ahmed Shaikh of Coventry University’s Institute for Future Transport and Cities to discuss automotive cybersecurity. Professor Shaikh previously worked with PaCCS on the PaCCS/KTN Policy Briefing “Innovation Challenges in Cybersecurity”.

This interview has been edited and condensed for clarity and concision.

Kate McNeil: Thank you for taking the time to chat today! The last time PaCCS spoke with you in 2016, you had just launched a cybersecurity start-up, CyberOwl. Can you update us on what’s happened with it since?

Professor Shaikh: CyberOwl does advanced risk analytics, and we’re very much focused on the maritime sector and critical national infrastructure. It has been a remarkable journey. We’ve scaled up from being part of a GCHQ accelerator to working out of a facility in London, where we’re working to deliver proof of concept for the maritime sector. We’ve also expanded overseas, with work in Greece and Singapore.

This year, Forrester listed us as one of the new emerging cybersecurity companies for critical infrastructure, in their report on New Tech: Industrial Control Systems (ICS) Security Solutions, Q1 2019, which is exciting. With that said, we’re still in the early stages of our growth, and are working energetically to make it commercially viable.

You are also, of course, an academic. On the academic side, what does your research focus on?

Over the last ten years or so, one of my key areas of focus has been cyber-physical systems. I’ve worked on things like cars and automotive systems, marine vessels, and IoT devices. My work examines how traditional cybersecurity models and tools measure up to the recent challenges posed by these cyber-physical systems. I also look for key gaps in current practice where we may need to develop new approaches, models, tools, or techniques to meet industry needs.

A lot of my colleagues focus on hacking into or breaking cyber-physical systems, which is important for demonstrating how flawed these systems are. However, that’s only one piece of the work that needs to be done to achieve secure systems. Here at Coventry, our research group focuses on engineering secure systems. We look at use-driven research, which entails working very closely with industry partners on real-world problems.

Can you give me an example of an exciting problem you’ve been working on in that space?

We’re building an IoT Security Testbed with a consortium of industry partners. The project will formally begin in December this year. It’s a breakthrough demonstrator that does deep level monitoring and security analytics and is focused on automotive cybersecurity problems. That project is being done in conjunction with a leading electronics company, UltraSoC, based in Cambridge, which has technology to look at low level monitoring. I’m working with them to solve various problems, one of which for example is measuring resilience in such environments.

You’ve done a lot of work in the automotive cybersecurity space, with papers touching upon issues including security assurance and testing, and you lead the Systems Security Group involved in automotive and transport security. What is the core focus of your work there?

A lot of my work in automotive security focuses on how we test systems, whether that is a full automotive system or its components. Testing for security is an open research problem, and testing for security in the automotive sector is an even deeper one. So that’s been a key focus for me.

Alongside that we’re also looking at system level study and analysis of cybersecurity problems. This means not just looking at vehicles on their own, but rather also looking at the people, organizations, and infrastructure they are connected to. In that area, we’re really focused on modelling efforts. We’re trying to represent the complexities of the real world in a model that lends itself to more analysis, more assurance, and more reasoning. As an example, some group colleagues have worked on evaluation ontologies for connected vehicle security assurance.

How has increasing research into autonomous cars changed the way in which you have to work or think about security and risk in the automotive space?

A lot of the technologies we’re working with in risk are, at their core, about protecting software. We’re concerned with algorithmic implementation. There’s no doubt that the roadmap for the development of autonomous vehicles has other key challenges in terms of the machine learning and AI used for sensors, data activity, and safety. However, security is a fundamental component underlying all of these problems, because everything else is done through a layer of software somewhere. This means that the trust we talk about today in terms of safety and the perception of autonomous vehicles could be violated if these vehicles are insecure. Safety and security go hand in hand.

Do you think that good cybersecurity approaches in this area are about creating the conditions for trust?

The work we do feeds into technical assurance. It’s helping supply chains make sure that systems are standing up to security challenges. The wider impact is to make sure that as technology matures, it matures alongside the know-how to build more secure systems. Consumers benefit from that through key members of the supply chain they can trust.

You’ve done work in automotive security in both the civilian and military sectors. Were there any challenges in the defence space of automotive security which were unique?

I’ve worked with both established companies that seek to provide consumer assurance, and with smaller companies that can bring breakthrough technologies to help solve problems in the cybersecurity space. For example, we have worked with CryptaLabs in London to develop automotive applications for a quantum random number generator developed for cybersecurity.

Work on passenger vehicles is generally more open ended and is driven by technological trends emerging in industry. That research is multidisciplinary and pays a lot of attention to the consumer. Passenger vehicle manufacturers are also very cost sensitive, and the cost to the driver is very important as well. Meanwhile, work done in a military context is either very well defined from the outset or is scoped by clients from inside the defence world. There is insight to be had from both sides of the military-civilian divide, and my role is to make sure that we become a trusted source of insight and analysis on both ends of the spectrum.

Over the course of your career, you’ve engaged in a lot of multidisciplinary work with colleagues in the social sciences and international relations. You have also worked to engage with those in the policymaking community. What’s that experience been like, and do you think there are things that people working on the hard sciences side of cybersecurity could be doing to make their work more accessible?

I’ve been interested in the policy side of things over my career because I’m mindful that our community of scientists needs to translate findings and insight from our work into wider policy contexts.

Policy in this area traditionally has happened through regulation, standards, and best practices in heavy technological industries. Security is somewhat similar insofar as we need more standards, however the core fundamentals of cybersecurity are not straightforward in the same way.

Cybersecurity is a human problem, a technology problem, and an economics problem, all mixed together. At a scientific level, there’s now a push to make some aspects of security more measurable, and to devise different measuring methods, including readiness and capacity assessments. However, I think that unless we also start assessing security more qualitatively, those working in policy will struggle to distinguish between good and bad security options, which will make policy incentives and mechanisms less effective.

In this vein, we developed an evidence quality model in 2018, which tries to help people distinguish between the kinds of quality criteria they can apply to evidence for policymaking. We know there are problems today with deep fakes and fake news, and we know that policymakers are becoming social media savvy. So we sometimes see even politicians ending up propagating fake news, which is a real problem. I’m not suggesting policymakers don’t understand evidence, but rather that in the context of cybersecurity we need more tools, and it remains an important problem our society needs to do more work on.

What are some of the examples of your current work in the policy space?

I’ve written policy briefs for the Government Office for Science, and I ran a project with colleagues at UCL looking at the evidence base for cybersecurity. We’ve been working on a scenario-based games approach to setting responses to cybersecurity-related incidents. This has led to another project, which has just started, looking at corporate boards and cyber readiness.


This article is the final post in the PaCCS in Focus cyber series, which has run throughout the month of October to coincide with European Cyber Security Month. Previous posts in this series can be found here. We’ll be back next Tuesday to discuss conflict negotiation and the role of empathy in international relations with University of Glasgow senior lecturer Dr. Naomi Head.