
How Microsoft and Google use AI to fight hackers



Last year, Microsoft's Azure security team detected suspicious activity in the cloud computing usage of a large retailer: one of the company's administrators, who usually logs in from New York, was trying to gain access from Romania. And no, the administrator was not on vacation. An attacker had broken in.

Microsoft quickly alerted its customer, and the attack was foiled before the intruder got too far.

Chalk one up for a new generation of artificially intelligent software that adapts to hackers' ever-evolving tactics. Microsoft, Google, Amazon.com and various startups are moving away from relying solely on older "rule-based" technology designed to respond to specific types of intrusion, and are deploying machine learning algorithms that crunch huge amounts of data on logins, behavior and past attacks to ferret out and stop hackers.

"Machine learning is a very powerful security technique – it is dynamic, while rule-based systems are very rigid," said Dawn Song, University of California at the Berkeley Artificial Intelligence Research Lab. "It's a very manual intensive process to change them, while machine learning is automated, dynamic and you can retrain it easily."

Hackers are famously adaptable, of course, so they too can exploit machine learning to create fresh mischief and overwhelm the new defenses – for example, by figuring out how companies train their systems and using that data to evade or corrupt the algorithms. The big cloud companies are painfully aware that the enemy is a moving target, but they argue the new technology will help tilt the balance in favor of the good guys.

"We will see a better ability identifying threats earlier in the attack cycle and thus reducing the total amount of damage and speeding up restoring systems to a desirable state, says Amazon, chief information officer Stephen Schmidt. He acknowledges that it is impossible to stop all intrusions, but says his industry will "step up to protect systems and make it more difficult for attackers."

Blunter Instruments

Before machine learning, security teams relied on blunter instruments. For example, if someone based at headquarters tried to log in from an unfamiliar location, they were barred entry. Or spam emails featuring various misspellings of the word "Viagra" were blocked. Such systems often work.
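To make the contrast concrete, here is a minimal sketch of that kind of static rule. The allowed country and the keyword list are invented for illustration; they are not any vendor's actual rules.

```python
# Minimal sketch of static, rule-based checks of the kind described above.
# The allowed country and the keyword list are invented for illustration.

ALLOWED_LOGIN_COUNTRIES = {"US"}                   # e.g. staff based at headquarters
SPAM_KEYWORDS = {"viagra", "v1agra", "vlagra"}     # hand-maintained misspellings

def allow_login(country: str) -> bool:
    # Block any login attempt from a country not on the fixed allow-list.
    return country in ALLOWED_LOGIN_COUNTRIES

def is_spam(email_body: str) -> bool:
    # Flag mail containing any keyword from the hand-curated list.
    text = email_body.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(allow_login("RO"))                           # False: barred entry
print(is_spam("cheap v1agra, limited offer"))      # True: blocked
```

Rules like these are easy to write and audit, but every new trick requires a human to add another rule by hand – the rigidity Song describes above.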

But they also flag plenty of legitimate users – as anyone who has been prevented from using their credit card while traveling abroad knows. A Microsoft system designed to protect customers from bogus logins had a false positive rate of 2.8%, according to Azure Chief Technology Officer Mark Russinovich. That may not sound like much, but it was deemed unacceptable, since Microsoft's larger customers can generate billions of logins.

To do a better job of working out who is legitimate and who is not, Microsoft's technology learns from the data of each company that uses it, adapting security to that client's typical online behavior and history. Since rolling out the service, the company has managed to bring the false positive rate down to 0.001%. This is the system that outed the intruder in Romania.
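Microsoft has not published how its models work, but the general idea of scoring a login against a customer's own history can be sketched roughly as follows. The features, the model choice (an off-the-shelf anomaly detector) and the numbers are assumptions for illustration, not Microsoft's actual system.

```python
# Rough sketch of per-customer login scoring: train an anomaly detector on
# one customer's past logins, then score a new login against that history.
# Features, model and numbers are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical history for one customer: one row per past login,
# [hour_of_day, distance_km_from_usual_city, is_new_device].
history = np.array([
    [9, 2, 0], [10, 5, 0], [14, 1, 0], [18, 3, 1], [11, 4, 0],
    [9, 1, 0], [13, 6, 0], [17, 2, 0], [10, 3, 0], [12, 5, 1],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# A login at 3 a.m. from roughly 8,000 km away on a new device.
suspicious = np.array([[3, 8000, 1]])
score = model.decision_function(suspicious)[0]   # lower = more anomalous
flagged = model.predict(suspicious)[0] == -1     # -1 means the model calls it an outlier
print(f"anomaly score {score:.3f}, alert customer: {flagged}")
```

Because the model is fitted per customer, the same login pattern can look routine for one company and alarming for another – which is how adaptive systems cut false positives compared with one-size-fits-all rules.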

Training these security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager who goes by the title of Data Cowboy. Siva Kumar came to Microsoft six years ago from Carnegie Mellon, accepting a second interview in part because his sister was a fan of Grey's Anatomy, the medical drama set in Seattle. He manages a team of about 18 engineers who develop the machine learning algorithms and must make sure they are clever and fast enough to thwart hackers and work seamlessly with the software systems of companies paying big money for Microsoft cloud services.

Siva Kumar is one of those who gets the call when algorithms detect an attack. He has been awakened in the middle of the night, only to discover that Microsoft's internal "red team" of hackers was responsible. (They bought him cake to compensate for lost sleep.)

The challenge is daunting. Millions of people log on to Google's Gmail each day alone. "The amount of data we need to look at to make sure this is you or a fraudster is growing at a rate that is too great for people to write rules one by one," says Mark Risher, a director of product management who helps prevent attacks on Google's customers.

Google now checks for security breaches even after a user has signed in, which is useful for nabbing hackers who initially look like genuine users. With machine learning able to analyze many different pieces of data, catching unauthorized logins is no longer a matter of a single yes or no. Instead, Google monitors various aspects of behavior throughout a user's session. Someone who looks legitimate at first may later show signs they are not who they claim to be, and Google's software can boot them out with enough time to prevent further damage.
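Google does not disclose which signals it weighs, but the idea of scoring risk continuously during a session, rather than once at login, might be sketched like this. The signals, weights and threshold are invented for illustration.

```python
# Sketch of continuous, in-session risk scoring: several behavioral signals
# are accumulated over the session instead of a one-time yes/no at login.
# Signals, weights and threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SessionEvent:
    new_device: bool
    new_country: bool
    bulk_mail_export: bool
    password_change_attempt: bool

WEIGHTS = {                      # assumed contribution of each signal
    "new_device": 0.2,
    "new_country": 0.3,
    "bulk_mail_export": 0.4,
    "password_change_attempt": 0.3,
}
RISK_THRESHOLD = 0.7             # assumed cut-off for ending the session

def session_risk(events: list) -> float:
    # Accumulate risk over everything observed so far in the session.
    risk = 0.0
    for event in events:
        for name, weight in WEIGHTS.items():
            if getattr(event, name):
                risk += weight
    return min(risk, 1.0)

events = [SessionEvent(True, False, False, False),    # looked fine at login
          SessionEvent(False, True, True, False)]     # then started exporting mail abroad
if session_risk(events) >= RISK_THRESHOLD:
    print("Terminate session and require re-authentication")
```

The point of the running score is exactly what the paragraph above describes: someone can pass the initial check and still be caught mid-session once their behavior stops matching the real user's.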

In addition to using machine learning to secure their own networks and cloud services, Amazon and Microsoft provide the technology to customers. Amazon's Macie service uses machine learning to find sensitive data among corporate information from customers such as Netflix, then watches who accesses it and when, and alerts the company to suspicious activity. Amazon's GuardDuty monitors customer systems for malicious or unauthorized activity. Often the service finds employees doing things they should not – for example, mining bitcoin at work.

CxO spamming

The Dutch insurance company NN Group uses Microsoft's Advanced Threat Protection to manage access for its 27,000 workers and associated partners while keeping everyone else out. Earlier this year, Wilco Jansen, the company's head of workplace services, showed employees a new feature in Microsoft's Office cloud software that blocks so-called CxO spamming, in which spammers impersonate a senior executive and instruct the recipient to transfer funds or share personal information.
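Microsoft has not described how the feature works internally. One plausible signal behind this kind of filtering, sketched below with invented names, a made-up domain and made-up phrases, is a mismatch between an executive's display name and the sending address.

```python
# Toy sketch of one possible signal for "CxO spam" filtering: the display
# name matches a known executive, but the message comes from outside the
# company and pushes an urgent payment. Names, domain and phrases are
# invented; this is not Microsoft's actual implementation.
EXECUTIVE_NAMES = {"jane doe", "wilco jansen"}   # assumed list of executives
COMPANY_DOMAIN = "example.com"                   # assumed corporate domain
URGENT_PHRASES = ("wire transfer", "urgent payment", "gift cards")

def looks_like_cxo_spam(display_name: str, from_address: str, body: str) -> bool:
    impersonates_exec = display_name.lower() in EXECUTIVE_NAMES
    external_sender = not from_address.lower().endswith("@" + COMPANY_DOMAIN)
    urgent_request = any(phrase in body.lower() for phrase in URGENT_PHRASES)
    return impersonates_exec and external_sender and urgent_request

print(looks_like_cxo_spam("Jane Doe", "jane.doe@freemail.example",
                          "Please make this urgent payment today."))   # True
```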

Ninety minutes after the demonstration, the security operations center called to report that someone had attempted that exact attack on the CEO of NN Group. "We were like, oh, this feature could already have prevented this from happening," says Jansen. "We need to be constantly on guard, and these tools help us see things we can't follow up manually."

Machine learning security systems do not work in all cases, especially when there is not enough data to train them, and researchers and businesses are always concerned that they can be exploited by hackers.

For example, they can mimic users' activity to fool algorithms that screen for typical behavior, or they can tamper with the data used to train an algorithm and warp it to their own ends – so-called poisoning. That is why it is so important for companies to keep their algorithmic criteria secret and change the formulas regularly, says Battista Biggio, a professor at the University of Cagliari's Pattern Recognition and Applications Lab in Sardinia, Italy.
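A toy example shows how poisoning can work in principle: injecting mislabeled points into the training set shifts a simple model so that it no longer flags the attacker's behavior. The data and the model below are invented for illustration only.

```python
# Toy illustration of training-data "poisoning": an attacker slips mislabeled
# examples into the training set so the retrained model stops flagging their
# behavior. Data and model (logistic regression on one invented feature) are
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Clean training data: feature = distance (km) from the user's usual login
# location; label 1 = malicious, 0 = benign.
X_clean = np.array([[1.0], [2.0], [3.0], [800.0], [900.0], [1000.0]])
y_clean = np.array([0, 0, 0, 1, 1, 1])

# Poisoning: six injected logins from ~850 km away, falsely labeled benign.
X_poisoned = np.vstack([X_clean, [[850.0]] * 6])
y_poisoned = np.concatenate([y_clean, np.zeros(6, dtype=int)])

clean_model = LogisticRegression(max_iter=5000).fit(X_clean, y_clean)
poisoned_model = LogisticRegression(max_iter=5000).fit(X_poisoned, y_poisoned)

attack = np.array([[850.0]])                     # the attacker's real login
print("clean model flags it:   ", clean_model.predict(attack)[0] == 1)
print("poisoned model flags it:", poisoned_model.predict(attack)[0] == 1)
```

Keeping the training pipeline and feature criteria secret, and retraining on vetted data, makes this kind of manipulation harder – which is the point Biggio makes above.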

So far, these threats feature more in research papers than in real life. But that is likely to change. As Biggio wrote in a paper last year: "Security is an arms race, and the security of machine learning and pattern recognition systems is not an exception." – Reported by Dina Bass, (c) 2018 Bloomberg LP

