Input AI for better Security Output
Authored by Anand Naik, Co-Founder and CEO, Sequretek
Anand Naik is co-founder and CEO of Sequretek. He has spent close to three decades in the IT industry in roles spanning executive management, sales, and delivery, and has been instrumental in advising on and designing some of the largest and most complex IT environments in India.
The use of artificial intelligence (AI) is here to stay, and given the rising number of security breaches, organisations and governments are increasingly applying AI to strengthen and protect their cyber hygiene.
Artificial intelligence takes protection a step further by detecting both known and unknown threats and by shortening reaction time to breaches, which raises the overall level of cyber hygiene. Most organisations and industries face a shortage of skilled security professionals, and AI can help by automating security processes for better decision making, faster resolution, improved root cause analysis, and better prediction.
Particularly for sectors such as pharma and finance, where static perimeter controls are no longer the most secure option, automated controls such as User Behaviour Analysis (UBA) may provide a higher degree of security, especially against stolen user credentials. UBA uses behavioural patterns of a user, such as login/logout times, designation and access rights, and the networks and geographies accessed, to model the risk posed by different users in the organisation. For example, a user who tries to log in to their company account outside normal hours, or from an unusual geography, to access sensitive files may trigger an alarm for the security personnel monitoring the organisation. The use of AI in cyber security has to be done in collaboration with human intelligence, and in most cases it ends up strengthening the skills of security personnel for better monitoring and protection. UBA is especially useful in industries with a significant threat of insider attacks.
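The UBA idea described above can be sketched as a simple risk-scoring function. This is a minimal illustration, not a production system: the baseline profile, feature weights, and alert threshold are all hypothetical, and a real UBA tool would learn these from historical behaviour rather than hard-code them.

```python
# Hypothetical per-user baseline, which a real UBA system would learn
# from the user's historical login behaviour.
BASELINE = {
    "usual_hours": range(8, 19),        # typically logs in 08:00-18:59
    "usual_countries": {"IN"},          # typically logs in from India
    "sensitive_paths": {"/finance/", "/hr/"},
}

def risk_score(login_hour: int, country: str, path: str) -> int:
    """Score one login event against the baseline; higher = more anomalous."""
    score = 0
    if login_hour not in BASELINE["usual_hours"]:
        score += 2                      # off-hours access
    if country not in BASELINE["usual_countries"]:
        score += 3                      # unfamiliar geography
    if any(path.startswith(p) for p in BASELINE["sensitive_paths"]):
        score += 1                      # sensitive resource touched
    return score

def should_alert(login_hour: int, country: str, path: str) -> bool:
    """Raise an alarm for the security team above an assumed threshold."""
    return risk_score(login_hour, country, path) >= 4

# Normal daytime access from the usual geography: low risk, no alert.
print(should_alert(10, "IN", "/docs/report.pdf"))       # → False
# A 02:00 login from an unusual country reaching finance files: alert.
print(should_alert(2, "RO", "/finance/q3-results.xlsx"))  # → True
```

In practice the score would combine many more signals, and the human analyst, as the article notes, remains the one who interprets and acts on the alert.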
When there is a breach, it is critical to discover it as quickly as possible and to separate the relevant signals from the white noise. Most organisations spend a great deal of time chasing false positives during a breach, which makes it even more damaging, as the attacker gains more time to move through secure networks and steal data. Newer approaches such as advanced persistent threat (APT) detection and anomaly detection help to distinguish real threats from the noise.

We live in the era of Big Data, which contains structured and unstructured data as well as dirty or erroneous data. As a consequence, traditional machine learning algorithms are being adapted to cope with noisy inputs while still detecting a higher percentage of genuine threats. This includes a range of supervised and unsupervised learning techniques. While supervised learning maps labelled inputs to known outputs, unsupervised learning uses tools such as clustering and association for better predictive and diagnostic analysis. Organisations now have a choice of AI and ML models, and can select the algorithm best suited to their data to maximise security.