Ido Safruti, Co-founder and CTO of PerimeterX, and Kim DeCarlis, CMO at PerimeterX, discuss how merchants protect their websites and web apps from the entire lifecycle of automated fraud attacks online.
Bradley Chalupski: In today’s competitive e-commerce environment, it’s never been more important to earn and maintain the trust of your customers. Merchant Fraud Journal’s “To Catch a Fraudster Podcast” is supported by Sift – the leader in digital trust and safety. Sift empowers companies to stop fraud and grow without risk. Visit sift.com/assessment to schedule a consultation with Sift’s trust and safety architects – industry experts who have decades of fraud-fighting experience at companies like Facebook, Square, and Google. They’ll help create a custom plan for your business with an emphasis on technology, organizational structure, and process. Visit sift.com/assessment today.
Bradley Chalupski: Ido and Kim, thanks so much for joining us.
Kim DeCarlis: Great to be here.
Bradley Chalupski: We’re here with Ido Safruti, CTO and co-founder; and Kim DeCarlis, CMO at PerimeterX. I’ll let both of you introduce yourselves and PerimeterX, and then we’ll jump right in.
Ido Safruti: I’m Ido Safruti. I’m the CTO and co-founder at PerimeterX. I’ve spent most of my adult career working in a combination of cybersecurity and web infrastructure, and that’s what we architected PerimeterX for: providing a highly scalable solution for modern web applications.
Kim DeCarlis: And I’m Kim DeCarlis. I’m a long-time sales and marketing professional in everything from virtualization to, most recently, security. And I really love the story that we have at PerimeterX about how we help customers protect their websites and web apps from a whole lifecycle of attacks and stop automated fraud.
Bradley Chalupski: I thought you were going to say you’re a longtime fan of Merchant Fraud Journal. You got my hopes up, but we’ll let it go.
Kim DeCarlis: That too, Bradley, definitely a fan.
Bradley Chalupski: So, we really appreciate you guys being on the program here – we’re very excited. Security is a topic that fraud prevention has been moving toward more and more over the years. I remember when I first got into this industry, it was almost exclusively chargeback-oriented, and over the years chargebacks have become just one piece of a much larger security puzzle. So, we really appreciate both of you taking the time to come on and share your insights with us – thank you very much. As we like to say here: let’s kick it off with the first crazy fraud story.
Kim DeCarlis: I think Ido will give us an awesome story. But the important thing, just to build on what you were saying, is to make sure we’re all on the same page about fraud. As you talked about, it used to be really about authorization fraud – people stealing credit cards and subjecting merchants to chargebacks, and so on. And as you hinted, we’re seeing the fraud world change: what we’re really talking about now is authentication. The question we like to toss out is: “Are you who you say you are? And are you doing what you should be doing?” That’s the angle we look at fraud from – more around authentication than authorization of a transaction.
Bradley Chalupski: So, if you’re a merchant, what would you say, in the practical day-to-day details, what does that mindset shift entail?
Kim DeCarlis: Ido, maybe you can talk about that a little bit. One of the terms we’ve coined at PerimeterX, Bradley, is the “post-login wasteland.” Historically, once you’ve logged into a site with a valid username-password pair, you get to do whatever you can do on that site. It’s almost like giving somebody the keys to your house – they can do anything. But the question is, should those people even have the keys? That’s something I know Ido has lots of good thoughts and examples about: the post-login wasteland and the need for merchants to pay attention to it.
Ido Safruti: I always tell customers, whenever I’m asked for advice on security: instead of reaching for technology and just looking for signals, the best way to protect your application is to understand what a fraudster will go after, and then put your guards there to disrupt them when they try their malicious activity. What steps will they take in order to monetize your site? What value does a user hold on the site that an attacker or fraudster could go after? Then look for any anomalies or suspicious activity happening there. And as Kim said about the post-login wasteland: never assume anything. Don’t assume that because someone presented the right credentials, everything should be open to that session from that point on – the session might be stolen, or the user might have malware on their device. So continuously look at the actions being taken and check that they make sense: is the session behaving differently? One interesting, actually pretty nice example we presented with a customer of ours is a situation they faced a few years ago – I think it was in 2016, which shows you that fraud is not something new – with Wix. They are a large platform where you can build and publish your own website or even a retail store. As part of the protections we provide, we were protecting account creation and login for site managers – the people who own and manage those assets. They saw a strange spike of new accounts being created, much higher than average, and were trying to figure out, “How is it that we’re not seeing that?” What we identified is that there were two ways to create an account on that site. One is the regular flow, where you put in a username and password and provide all the information required to open an account. The other, to make it even simpler, is a syndicated or social login: LinkedIn, Google, Facebook. With that path, they assumed that since Facebook or Google had validated that you have the right credentials, it was obviously you, and the account was created automatically, without passing any validation through us.
Ido Safruti: So, that was a path that we, as PerimeterX, were blind to because it was trusted. After diving into that hole, investigating, and doing some reverse engineering, that is exactly where we found what the attackers were doing. And it’s a very interesting approach. What the attacker did was distribute malware in the form of a browser extension. Once a user installed it, the extension would sit there silently, pretending to do something else – not, obviously, looking like malware – and wait for the user to log in or do something on Facebook. Once the user logged in to Facebook and there was an active, authenticated session, it would open a hidden frame in the background of the browser and trigger account creation on Wix, using the Facebook token from the user’s session to create an account. The second thing it would do with that account – because it’s an account for generating websites – is generate a site to distribute the malware: a unique site, specific to that user, so it wouldn’t look like one viral link that is easy to detect as spam, with a link to download the extension. And then, obviously, it would exploit the fact that the user was on Facebook and send a message to everyone on their friends list inviting them to visit the site. That is one way of doing it. The malware got even more clever: they figured the user may also be logged in to Google, and once they identified a Google token, they used it to upvote the extension on the Chrome store so it would look legitimate, with a lot of good reviews and comments. And that is how it kept spreading to new users.
Bradley Chalupski: I was going to ask you two questions, and I think you touched on both of them a little bit. One is: how do merchants identify this beforehand? And I’m seeing that that’s going to be really difficult because, obviously, fraudsters are already hacking into social proof and things of that nature that you would usually use to verify. And two: how are you finding this? What are you doing that allowed you to work with Wix and reverse engineer this process to find this connection?
Ido Safruti: So, this goes exactly back to the original principle: don’t assume, and don’t trust things just because they’re there. You need to go back and verify – when you’re using a specific technology – what problem it actually solves. When I rely on Facebook login, for instance, or on a new user signing in with their Google ID, what I know is that Google will verify that the credentials and the account are valid, and it makes it easier for users who already have an identity to log in. What it does not tell me is whether this specific session – which might be driven by a bot, or by a malicious actor who got control of that account – is behaving legitimately. All it says is that the credentials are valid and that such an account exists. So instead of assuming and giving them a free pass, make sure that even when an account is created or logged into using a syndicated login you don’t control, you look for the signals before and after, and at what actions the users are taking. When someone creates a new account, you may want to limit what they can do right away to mitigate the risk. Think of it like opening a bank account or getting a credit card: initially, your credit limit is low, because the bank has to account for the risk that maybe you’re not who you say you are, and it doesn’t want to be left at fault. These are exactly the actions you want to track post-login. Obviously, if you see someone logging in and, within seconds, creating a site, publishing it, and that’s it; or logging into an account and immediately changing the home address and the user’s details; or immediately doing something that is not very typical for a user to do – this is where you may add protection, look into the account, and scrutinize the specific actions those users are taking.
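As an illustration of that idea, here is a minimal sketch of how a merchant might gate what a brand-new account can do and flag sensitive actions taken seconds after login. The action names, thresholds, and fields are assumptions invented for the example, not PerimeterX's actual rules.

```python
from datetime import datetime, timedelta

# Illustrative only: account-age gates plus post-login action checks.
# All thresholds and action names below are assumed for the example.
HIGH_RISK_ACTIONS = {"publish_site", "change_home_address", "change_payout_details"}
NEW_ACCOUNT_GRACE = timedelta(days=7)  # assumed probation window for fresh accounts

def action_allowed(account_created_at: datetime,
                   login_at: datetime,
                   action: str,
                   action_at: datetime) -> tuple[bool, str]:
    """Return (allowed, reason) for an action taken after login."""
    account_age = action_at - account_created_at
    seconds_since_login = (action_at - login_at).total_seconds()

    # Brand-new accounts get a reduced "credit line" of capabilities.
    if account_age < NEW_ACCOUNT_GRACE and action in HIGH_RISK_ACTIONS:
        return False, "high-risk action blocked during new-account probation"

    # A sensitive change seconds after login is atypical for real users.
    if action in HIGH_RISK_ACTIONS and seconds_since_login < 30:
        return True, "allowed, but flagged for review: too soon after login"

    return True, "ok"
```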
Bradley Chalupski: So, is malware the usual vector here? You’re talking about a very complex injection based on a fake widget inside of the browser. Is that the normal M.O. that you’re seeing from fraudsters? Or are there other examples that you can give of ways that they’re pulling off these kinds of attacks?
Ido Safruti: I would say that malware often makes for the cool story, but it’s not necessarily the most prominent or most popular way of committing fraud – there are simpler ways. That said, one of the things we’ve identified in the last couple of years is the commoditization of custom-built malware. With the evolution of open malicious frameworks, the dark web, and the ability of fraudsters and malicious actors to collaborate, it is easier now than ever to create, or repurpose, existing malware to do tasks on my behalf if I want to target a specific task on a specific site, and we’re seeing more of that. So this is definitely becoming more prominent. But in many cases, fraud can be done by a botnet. I don’t necessarily need to break into users’ accounts or install malware on their devices; I can buy computational resources to do something at scale – if I want to scrape content, if I want an advantage when buying something in limited supply, or if I have a dictionary, a database of usernames and passwords from the dark web, and I just want to test those credentials and validate them so I can take ownership of specific accounts on the site. That is something I can do entirely by buying compute power and running processes, and those are extremely popular attacks that we’re seeing at very high volume.
Kim DeCarlis: If I can add some color to that: one of the things we talk about, Bradley, is something we call the “web attack lifecycle,” which is the interrelated nature of the attacks happening out there – the theft, the validation, and then the use of stolen credentials and account information. Ido has done a great job of talking about malware as a vector, and using compute cycles and botnets, and so on. But one of the other ways people get access is through lists of stolen username and password credentials that are available for purchase – millions of credential pairs for pennies, for not very much money at all. Some of them are validated, and the ones found to work are then used to try to commit that fraud. A lot of the time the credentials are really old – say, from a Yahoo breach 10 years ago – and everybody has changed their password since then. But most users reuse passwords; on average they have maybe six passwords across the whole set of applications they rely on, so there’s a lot of password reuse. So getting those credentials and then using them, as Ido talked about, to validly log in to a site – that happens a lot. And I think the key thing for merchants is, as Ido says: don’t assume that just because someone has logged in with a valid credential pair, they are who they say they are. There need to be further downstream checks before fraud can happen. Setting flags on too many password changes, setting flags on shipping-address changes, setting flags on things that would indicate somebody is trying to take over that account to commit fraud – that is something we think merchants need to pay closer attention to.
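A hedged sketch of the downstream checks Kim describes – counting recent password changes and flagging a high-value purchase shortly after a shipping-address change. Event names, thresholds, and dollar amounts are illustrative assumptions, not a vendor rule set.

```python
from collections import Counter
from datetime import datetime, timedelta

def takeover_flags(events: list[dict], now: datetime) -> list[str]:
    """events: [{'type': 'password_change'|'shipping_change'|'purchase',
                 'at': datetime, 'amount': float (purchases only)}]"""
    flags = []
    window_start = now - timedelta(days=1)
    recent = [e for e in events if e["at"] >= window_start]
    counts = Counter(e["type"] for e in recent)

    # Assumed threshold: several password changes in a day is unusual.
    if counts["password_change"] >= 3:
        flags.append("too many password changes in 24h")

    # Assumed rule: a large order right after a shipping-address change.
    ship_changes = [e["at"] for e in recent if e["type"] == "shipping_change"]
    for e in recent:
        if e["type"] == "purchase" and e.get("amount", 0) > 500:
            if any(0 <= (e["at"] - t).total_seconds() < 3600 for t in ship_changes):
                flags.append("high-value order within 1h of shipping-address change")
    return flags
```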
Bradley Chalupski: Absolutely. Ido, before we get off this point, I want to ask: You gave a little bit of a list, Kim now gave a little bit of a list – is there anything that hasn’t been said yet about what merchants should be looking for once somebody has logged in?
Ido Safruti: Yeah, there are the obvious fraud profiles – the “user usually logs in from X” kind. That’s the example people give because it’s easy to understand, but there are many more complex profiles, and this is where fraud tools look for any anomaly in user behavior. This user always logs in from the United States, and suddenly you see a transaction coming from a different country; that could raise a flag. Or, even more suspicious, you see that user still active from the United States while another session for the same user is coming in from a different country. So you’re looking for suspicious activities. There are also indicators in the login attempts themselves. Beyond what Kim was describing – what to look for as an indication that an account has actively been taken over – there are signals that tell you someone is running a credential stuffing attack on your site. One clear indication: if I have credentials from a database breach – let’s say a million of them – the majority will not have an account on your site, and the ones that do won’t necessarily still be using the same credentials. That means that when I run such a campaign, I expect to see a very high rate of failures: “user does not exist” or “password and user do not match.” If you suddenly see a spike of failed login attempts, someone is targeting you. And that shouldn’t be surprising, because basically every retailer we know is constantly a target. My expectation is that anyone who looks at their logs will see thousands or millions of failed login attempts per day, done systematically by attackers.
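A minimal sketch of the credential-stuffing signal Ido describes: compare the recent login-failure rate against the site's normal baseline and flag a large spike. The baseline and multiplier here are assumptions for illustration.

```python
def stuffing_suspected(failed_logins_last_hour: int,
                       total_logins_last_hour: int,
                       baseline_failure_rate: float,
                       spike_multiplier: float = 5.0) -> bool:
    """Flag when the hourly failure rate far exceeds the historical baseline."""
    if total_logins_last_hour == 0:
        return False
    failure_rate = failed_logins_last_hour / total_logins_last_hour
    return failure_rate > baseline_failure_rate * spike_multiplier

# Example: a site that normally sees ~2% failures suddenly seeing 40%.
print(stuffing_suspected(4000, 10000, baseline_failure_rate=0.02))  # True
```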
Bradley Chalupski: Amazing. All right, next story.
Ido Safruti: Again, as I said, most of the cool stories relate to malware. This is an example of what I mentioned about the trend toward custom-built malware and the ease of repurposing existing malware and stealers. We saw this with one of our customers, a marketplace where people can offer services as well as consume services from others – so you have power users who offer services alongside ordinary consumers. We saw accounts being taken over, and the customer was also seeing indications of breach. That also brings up the question of what you do once you identify that an account has been breached. Do you force the user to reset the password? Do you just keep them off the service? Do you block the account? Do you try to reach out? There’s no one good answer. It really depends on the site, the type of account, the type of damage, and the type of breach you’re seeing. Forcing two-factor or multi-factor authentication through a third party, for instance, is one way to validate the user, but it’s not always available. What we saw there was that whenever they forced the user to reset their password – doing a reset and sending an activation link to the user’s email – the user would reset it, and within seconds the account was taken over again. That felt extremely frustrating for them. What we found was an attacker targeting specifically their power users, their sellers, through phishing and all kinds of malicious emails carrying this malware, trying to get those users infected. The malware was purpose-built by that attacker for that website – it wasn’t targeting any other site, and it wasn’t generic malware trying to harvest credentials or money or Bitcoin. But it was built on an open-source base from a known class of tools called stealers, which sit there quietly and steal information the user is entering. A stealer may take a crypto wallet, it may take your credentials, it may take a bunch of things.
Ido Safruti: They used it specifically to steal session cookies. Instead of stealing the credentials themselves, the malware waited for the user to authenticate. Once that happened, it stole the session cookie – the thing that validates the user’s access to the service – and sent it over to the command and control server. From that point, the attacker would establish a parallel session. And because they were using the authentication token obtained by stealing the cookie, as far as the server was concerned it was an authenticated session – so they could go and change details on the account, and they could move funds around from this account to other accounts. The solution, in that case, is obviously to block the user until the malware is removed. But any validation you send to the user won’t help, because the attack rides on the actual user: the user will pass every test you can challenge them with – multi-factor authentication, a password reset sent to email – because that user controls the email. That was a different type of behavior and a new realization for their fraud team: “Okay, these kinds of situations exist and we need to deal with them.” We identified it by noticing that the same authentication token was being used in parallel from two different machines in two different countries, which is not something that’s ever supposed to happen: a session is unique.
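A simple sketch of the detection Ido mentions – the same session token actively used from two different countries or devices at roughly the same time, which a legitimate session should never produce. The data shapes and the time window are assumptions made for the example.

```python
from collections import defaultdict
from datetime import timedelta

def concurrent_token_reuse(requests, window=timedelta(minutes=10)):
    """requests: iterable of dicts with 'token', 'country', 'device_id', 'at' (datetime).
    Returns the set of tokens seen from more than one country/device within `window`."""
    by_token = defaultdict(list)
    for r in requests:
        by_token[r["token"]].append(r)

    suspicious = set()
    for token, reqs in by_token.items():
        reqs.sort(key=lambda r: r["at"])
        for i, a in enumerate(reqs):
            for b in reqs[i + 1:]:
                if b["at"] - a["at"] > window:
                    break  # requests are sorted; later ones are even further apart
                if a["country"] != b["country"] or a["device_id"] != b["device_id"]:
                    suspicious.add(token)  # one session, two origins: shouldn't happen
    return suspicious
```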
Bradley Chalupski: I have a question about the psychology of all of this. You mentioned that there was some frustration because people were changing their passwords and still getting the same results – it seems highly illogical that someone would be able to steal your password again so quickly. So when you’re dealing with customers in the field, take me through some of the mindset shifts or emotional tangents people go on when they face these kinds of attacks. How do you recommend people keep a level head and work through the problem scientifically, as opposed to getting very frustrated and making decisions that cut off a customer entirely, keep piling on friction, or overcompensate and increase false decline rates – just smashing buttons, as we would say, trying anything that might work because they’re upset and it doesn’t make sense to them? How do you help people in that position deal with that and find a good, logical, scientific solution?
Ido Safruti: I think this is especially hard in retail, because of the high sensitivity to user experience overall and to reducing friction for your users. If performing a task on your store or application becomes too complex, there will be churn, and people will complete the task somewhere else with less friction. So one of the concerns, and one of the frustrations, is: how do I do this without introducing more friction for existing, legitimate users? If the answer is constantly, as you said, smashing buttons, forcing resets, and doing that blindly, it adds friction and does not solve the problem. This is where the fraud team or the security team should put the research hat on, take an analytical approach, and treat it as an investigation. What are the facts that you know? What is your hypothesis? What would help you prove it? Go look for those signals. Don’t let a good hypothesis blind you into convincing yourself of things while ignoring the facts – keep collecting facts to test it. And if it doesn’t hold, go back to the drawing board, back to the basics: what do you know for sure, what are the indications, and what possible explanations can you draw, so that you can collect additional signals either to understand what’s happening or to prove your new hypotheses?
Bradley Chalupski: And that feeds into another question that I had off of this story, which is: The fraudsters now are so sophisticated, and they move so quickly, and it’s really not just a buzzword to say that. Hearing this, this is the first I’ve ever heard of this, and it’s really quite ingenious to start stealing cookies and running parallel sessions and all these types of very sophisticated, very well-thought-out strategies – not just the tactical execution, but very cogent and effective strategies. How can merchants go about trying to, preemptively– I don’t know if prevent is the right word, but try to think about how they would be vulnerable. You had mentioned this at the beginning that you really need to think about how you can be vulnerable, where your systems could be tested. Do you have a framework or a system that you can suggest for how people go about that? Because it’s very difficult. When you have a system out in the field, I would think you’re of two minds. One is, obviously, everything is vulnerable, is one mind. And then the other mind is, “But my system is very well designed and we’ve accounted for everything. And so it’s not going to be hard.” Those two contradictory thoughts, I would think, are in the mind of every fraud-fighting team. What is a systematic way for them to get out of that mindset and be self-critical?
Ido Safruti: I think one way that even we, within our organization, try to address that is by having a red team, where the team is not part of the product organization, so to speak, and they take an attacker’s approach to the system and try to see how we can break it – looking at it from different angles, rather than assuming, “Oh, I know how it is designed, and here is a thought on how I could break in.” If you’re too deep into the implementation, you’ll automatically say, “Oh, but we thought about that and we designed for it,” instead of actually trying. So having a red team is a good thing. But instead of building a large red team, one exercise that keeps this kind of benefit and fosters it across the organization is rotating engineers, researchers, and other people from the organization through the red team for a short period or for a project – being guided there, being part of it, getting that point of view for a while, and then going back to what they normally do with a different approach. It gives resources to the red team, which you will [27:36 inaudible], but it also helps infect the entire organization with that state of mind of looking at things and coming at them from a different angle.
Bradley Chalupski: All right, awesome. If you guys have time, let’s have one more story.
Kim DeCarlis: Yeah, we do. And we’d love to share a story about a customer of ours that offers online bookkeeping, payroll, and accounting services for small and medium-sized customers – about half a million of them, actually. They’re processing really sensitive customer data, and they were worried about cyber-criminals trying to take over accounts and commit fraud. They were really seeing a ton of fraudulent visitors to the site, which needed to be protected. And I know, Ido, you’ve got some interesting color about this customer.
Ido Safruti: Just in the first couple of weeks – and when you think about a more enterprise-type account for enterprise users, you don’t expect a high amount of traffic or activity – we were seeing a lot of malicious pageviews, so to speak: malicious visits amounting to about a sixth of the overall visits or views on the application and the site ended up being blocked by us. Once the customer saw the type of attack and the activities that were identified as anomalous and blocked, they could easily see what had been causing the problem. It also removed a lot of the friction their security team had been dealing with until then – all kinds of alerts coming into the system from unnecessary or strange activity. Obviously, anyone managing bookkeeping and payroll processing holds a lot of very sensitive data that is a target for this kind of account takeover, because once you get in there, there is a lot of personal [29:51 inaudible] information that you can use either to open fraudulent credit applications or simply to pull financial information. By looking at these anomalies, preventing these attacks, and interrupting the attacker lifecycle – adding friction to the attacker’s activities – they got clean post-login traffic and freed up their security team for other tasks, such as protecting against malware and other types of attacks, while the front door of their web application was secured against all kinds of malicious players trying to take over accounts.
Bradley Chalupski: I do have one question there: how much friction do you find is too much friction for fraudsters? We’ve all heard the stories about how they’ll always go to the easier place – like water finding the path of least resistance. But at the same time, being humans and not computers, I have to think that if they get the idea in their head that they’re going to perform a certain type of attack against a certain website or clientele, they’ll stay committed to it past what would be considered optimal in a vacuum. So what do you find is the threshold? Do you have any empirical evidence? If not, that’s fine – but any anecdotal evidence as to how much you have to impede a fraudster before they’ll go elsewhere?
Ido Safruti: A really good question, and in many cases this is exactly where I lead. I think, in the space we’re in, a fraudster will be willing to invest resources only up to the amount that still gives them a net positive return on investment. So if you’re a site where, by committing fraud, I can potentially get $1,000 – money, merchandise, or something else worth $1,000 – I’d be willing to invest quite a lot to penetrate a specific account. If I can only get $10 per account, the level of investment will be lower. And that’s an important point: in retail you’re usually not facing a nation-state-backed attack that will throw whatever resources it wants at you because the target is strategically important. This is all about return on investment. If you’re adding just a little bit more friction than your competitors, the attackers may find some other place to go. It’s sort of like the meme: “You don’t need to run faster than the bear, you just need to run faster than the slowest runner.” So you need to add friction to disincentivize the attackers. Understand how much value an attacker can get by committing fraud against you, and make the attack more expensive than that – force them to spend more computational resources, more development, more effort. Anything that is more expensive for them tips the scale, makes you less interesting, and sends them off to commit fraud somewhere else.
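As a back-of-the-envelope version of that return-on-investment argument: an attack is only worth running when the expected take exceeds its cost, and raising the per-attempt cost flips the economics even if the payout stays the same. All figures below are made up for illustration.

```python
def attack_is_worthwhile(payout_per_account: float,
                         success_rate: float,
                         attempts: int,
                         cost_per_attempt: float,
                         fixed_tooling_cost: float) -> bool:
    """Crude attacker's-eye ROI check: expected take vs. total cost."""
    expected_take = payout_per_account * success_rate * attempts
    total_cost = cost_per_attempt * attempts + fixed_tooling_cost
    return expected_take > total_cost

# Adding friction raises cost_per_attempt (more compute, more dev work)
# and turns the same campaign from profitable to not worth running:
print(attack_is_worthwhile(10, 0.001, 1_000_000, 0.005, 2000))  # True
print(attack_is_worthwhile(10, 0.001, 1_000_000, 0.02, 2000))   # False
```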
Bradley Chalupski: Great advice. I do want to add one anecdote that came to mind from a previous podcast we did with Alexander Hall, who’s well known now in the community. He was saying that when he was actively operating as a fraudster, there was one company – I can’t remember who it was; I think he actually gave them a shoutout on one of our podcasts or webinars – whose security he never managed to crack. He had spent inordinate amounts of time trying, just because it became a thing: he had cracked so many others and couldn’t figure out how to crack theirs. It became a mini obsession; he was determined to figure it out, and he said he never did. So it is definitely true that people will keep trying, and you really have to make it painful for them. I really liked what you said about thinking rationally about the dollar value and how much the average person is really going to invest in trying to breach your defenses. I think that’s great.
Kim DeCarlis: That’s a really important point, Bradley. Just to drive it home: it comes down to changing the economics for the attacker and making it way more expensive and painful to go after a site that our company protects. That’s really how I would sum the whole idea up.
Bradley Chalupski: For sure. This has been super, super insightful. I really can’t thank you guys enough – this has been a wealth of actionable information, which is what we’re all about here. Before we let you go, I know you recently released a report that I think would be of great interest to our audience, touching on a lot of these issues. I would love it if you could give us a quick overview of that report and let people know where they can find it, and then we’ll sign off.
Kim DeCarlis: Thanks for that opportunity, Bradley. We do a report every year that we call the Automated Fraud Benchmark Report, and we’ve just released the edition covering the last calendar year. We saw some really interesting stats: scraping attacks were up 240%; credit attacks were up 111%; and even though overall traffic to sites was down a little as people started shopping in-store more, bot attacks increased 106%. So attackers are still going after sites, still trying to commit fraud and make money from them. I would encourage your readers to check out PerimeterX.com and the Automated Fraud Benchmark Report – it’s highlighted on the homepage. We’d love for them to download it and hopefully take something from the research we’ve done and the recommendations we’ve made for how to stop those attacks and better secure their online presence.
Bradley Chalupski: We cannot thank you enough. This has been truly insightful. Thank you so much, Ido and Kim, for taking the time to come on the podcast and speak with our listeners, and you’re welcome back anytime. We hope to have you again in the future.
Ido Safruti: Thank you.
Kim DeCarlis: Great. Appreciate it.