Open Source and Preventing Cheating

but then people won’t compete as much…

[quote]I can ignore cheat analysis on anything other than the no. 1 hiscore.
[/quote]
What if the no.1 and no.2 highscores are fake?

Oh, wait. Never mind… -move along- :slight_smile:

By the way, in another bit of canny business nous from my wife (which reminds me once again that I didn’t just marry her coz she was a babe), we have removed the online hiscore display from within the game and simply added a button that opens a browser to our online hiscores webpage.
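For the curious, the button itself needs next to no code. A minimal sketch, assuming a Java 6+ runtime where java.awt.Desktop is available (the URL is made up):

[code]
import java.awt.Desktop;
import java.net.URI;

public class HiscoreButton {
    // Hypothetical address - substitute your own hiscores page.
    private static final String HISCORES_URL = "http://www.example.com/hiscores";

    /** Opens the online hiscores page in the player's default browser. */
    public static void openHiscores() {
        try {
            if (Desktop.isDesktopSupported()
                    && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE)) {
                Desktop.getDesktop().browse(new URI(HISCORES_URL));
            }
        } catch (Exception e) {
            // If the browser can't be launched, just show the URL in-game instead.
            System.err.println("Couldn't open browser: " + e);
        }
    }
}
[/code]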

But, you’re thinking, this sucks, why not display the scores in the game?

Because we want to make sure there’s a steady stream of visitors coming back to our site to see what’s up, that’s why.

Even pirates who crack the game will end up there and who knows, maybe they’ll buy our next game.

She’s sharp, Charlotte is :slight_smile:

Cas :slight_smile:

[quote]we have removed the online hiscore display from within the game and simply added a button that opens a browser to our online hiscores webpage.

But, you’re thinking, this sucks, why not display the scores in the game?
[/quote]
Nice idea, but unfortunately it can backfire; a significant proportion of people dislike having their game interrupted by a switch to a web browser. I say this based on a couple of years spent trying to increase a community’s uptake of its website - even when there were thousands of pounds at stake (with a very high chance of getting some of it), lots of people didn’t like having to go to the website for anything, though they had nothing against the website itself.

So, I’d suggest you include either the top 5 current scores, or the five surrounding the player’s current best, inside the game. If they get to see one of those two, and know they have to go online to see the other, then you have a better chance of them doing so. The psychology of this is that you have got them to invest some of their interest, and hence some of their motivation, by giving them a little of what they want - if you can’t get any of something you want, there is a tendency to not bother; if you can get half of it easily, few people can resist getting the other half.
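Purely as illustration of what I mean by “the five surrounding the player’s current best”, here’s a sketch, assuming you already have the score table sorted highest-first (the types and method names are made up):

[code]
import java.util.List;

public class ScoreWindow {
    /** The top n scores from a table already sorted highest-first. */
    public static List<Integer> topN(List<Integer> sorted, int n) {
        return sorted.subList(0, Math.min(n, sorted.size()));
    }

    /** Up to n scores surrounding the player's best, given its index in the table. */
    public static List<Integer> surrounding(List<Integer> sorted, int playerRank, int n) {
        int from = Math.max(0, playerRank - n / 2);
        int to = Math.min(sorted.size(), from + n);
        from = Math.max(0, to - n); // keep the window full near either end of the table
        return sorted.subList(from, to);
    }
}
[/code]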

I’d personally show the top 5 scores in-game, so that they can see high-scores in game, and know that if they were REALLY good, they’d not need to go to the site (but in reality, once they’re that good, and that hooked, they’ll be going to the site anyway). But the lure of seeing themselves on a scoreboard should be enough to then push them to go to the website.

Shrug. Just a thought. :slight_smile:

I just saw this… thought it was interesting. Could it have been prevented, or would the needed safeguards be impractical to implement? I personally suspect that such things cannot be prevented without making sacrifices to the game’s efficiency… I think the approach they took here, where they were quickly able to restore a valid game state from just prior to the attacks, is a reasonable compromise… until the attacks become too frequent and no progress can be made…

http://www.wired.com/news/games/0,2101,59034,00.html

[quote]imagine each client has a “computational unit” which receives blocks of initial data, computes something and sends results back to the server. server doesn’t compute itself - it only dispatches packets of initial data and results between clients (customers).

so, neither client knows what data it processes. and server needs not huge computational power.

yes, you need great “dispatching” system, evaluation of concrete client’s power, bandwidth and so on… but such a system could support really “massive” and “greed-for-processing-power” games based on the computational power of clients’ computers.
[/quote]
I think it’s interesting that this post was never addressed. I assume it’s because the topic had moved on to hi-score cheating from the original post regarding die-rolls. This is an area I’ve given some thought to, but am interested in hearing the opinion of the group before diving too deeply into it.

We have thought about it for our game, since we have a lot of advance-planning AI that could be broken out into manageable “packages” and that can work perfectly well autonomously.

Our thinking was to send out a computation/planning packet to a bunch of clients, and then do a “tell me 3 times” approach on the results.
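Roughly what I mean, as a sketch only (the use of String results and the two-out-of-three threshold are just assumptions for illustration):

[code]
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RedundantDispatcher {
    /**
     * "Tell me three times": the same work packet goes to three different
     * clients, and a result is only accepted if at least two of them agree.
     * Returns null when there is no agreement, in which case the server
     * would re-dispatch the packet to a fresh set of clients.
     */
    public static String acceptMajority(List<String> resultsFromThreeClients) {
        Map<String, Integer> votes = new HashMap<String, Integer>();
        for (String r : resultsFromThreeClients) {
            Integer count = votes.get(r);
            votes.put(r, count == null ? 1 : count + 1);
        }
        for (Map.Entry<String, Integer> e : votes.entrySet()) {
            if (e.getValue() >= 2) {
                return e.getKey(); // at least two clients agree
            }
        }
        return null; // no agreement
    }
}
[/code]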

The business side is trickier, though, because I think you need to let users “opt in” on this approach, otherwise it could easily backfire on you. We were thinking of giving users a discount on the subscription if they let us use their CPU.

[quote]The business side is trickier, though, because I think you need to let users “opt in” on this approach, otherwise it could easily backfire on you. We were thinking of giving users a discount on the subscription if they let us use their CPU.
[/quote]
I don’t think that is necessary. You aren’t collecting personal information and the user must choose to run your game. They can’t expect to run the game and use no CPU time. Obviously computation of some sort MUST be done on the client and results must be sent to the server no matter what you do. This is not so special as far as the user is concerned.

This is the solution SETI@Home uses to counter users trying to “improve” on their data analysis algorithms. By throwing out any result that falls outside the standard deviation they manage to keep things kosher. If all three results are way off (statistically improbable, but not impossible) they throw them all out and resend the data.
As long as the players don’t know who’s processing whose packets you should be safe from cheating/collusion.
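For numeric results, the “throw out what falls outside the standard deviation” part might look something like this (a sketch only; the one-sigma cutoff is arbitrary):

[code]
public class OutlierFilter {
    /**
     * Returns the mean of the results that fall within one standard deviation
     * of the overall mean, or Double.NaN if the batch is empty or nothing
     * survives - in which case the server would resend the data.
     */
    public static double acceptWithinOneSigma(double[] results) {
        double mean = 0;
        for (double r : results) mean += r;
        mean /= results.length;

        double variance = 0;
        for (double r : results) variance += (r - mean) * (r - mean);
        double sigma = Math.sqrt(variance / results.length);

        double sum = 0;
        int kept = 0;
        for (double r : results) {
            if (Math.abs(r - mean) <= sigma) { // discard outliers
                sum += r;
                kept++;
            }
        }
        return kept == 0 ? Double.NaN : sum / kept;
    }
}
[/code]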

[quote]I don’t think that is necessary. You aren’t collecting personal information and the user must choose to run your game. They can’t expect to run the game and use no CPU time. Obviously computation of some sort MUST be done on the client and results must be sent to the server no matter what you do. This is not so special as far as the user is concerned.
[/quote]
If you only want to borrow CPU time while the player is actually playing the game, then I completely agree with you. You’ve got to assume something is happening on your computer while you’re playing.

I think the issue only comes up when you want to get access to idle CPU time while the player is off doing something else (mowing the lawn, walking the dog, going to work, etc.). In that case there is probably some value in providing added incentive to the user. Mind you, I don’t think you need much incentive. SETI@Home uses a ranking system where users who contribute the most CPU time get listed on the site. For broadband users idle CPU time costs nothing, so a discount scheme based on donated CPU cycles would probably be a pretty easy sell.

:slight_smile: At last! :slight_smile:

Well, we want to use this approach soon. So, will see :slight_smile:

[quote]I just saw this… thought it was interesting. Could it have been prevented, or would the needed safeguards be impractical to implement? I personally suspect that such things cannot be prevented without making sacrifices to the game’s efficiency… I think the approach they took here, where they were quickly able to restore a valid game state from just prior to the attacks, is a reasonable compromise… until the attacks become too frequent and no progress can be made…

http://www.wired.com/news/games/0,2101,59034,00.html
[/quote]
This should be a caveat to all those out there claiming that client-server is the only way to prevent cheating. Until someone develops a provably unhackable client-server architecture your games will be susceptible to these kinds of attacks. No matter how you design your game client-server or peer-to-peer, the best you can ever do is make it really, really hard to cheat, which is all JohnMunsch was claiming.

Part of the reason why peer-to-peer solutions will have a much harder time taking hold is that the game itself has to be made secure, not just the hardware on which it runs, which requires a far greater understanding of network security than most game developers have. The fact that server security is largely someone else’s problem leads many game developers to naively believe (at their peril) that they are somehow immune to cheating.

That said, it’s always amusing when people claim that something “can’t be done” simply because they can’t think of a way to do it. The fundamental assumption in all serious peer-to-peer computing work is that the individual peer cannot be trusted, and a significant amount of work is being done to address this issue by experts in network security who are too well paid to be making games. I have no doubts that if and when a secure peer-to-peer gaming solution does appear it will be largely based on work done outside the community.

One of the most promising aspects of p2p is that in a well-designed, fully decentralized architecture it’s impossible to significantly compromise the game without compromising a significant number of nodes. The larger the population of nodes, the larger the number of nodes you have to compromise to effectively cheat. In a client-server architecture, all you have to do is hack the server, and you control everything.

Of course, as soon as you make the game “administerable” you’ve given away the keys to the kingdom anyway :slight_smile:

Are you trolling? There are plenty of well-known “unhackable” client-server architectures. Someone here even pointed out a “provably” unhackable p2p architecture IIRC. It depends what your application is. I don’t believe that an unhackable FPS architecture exists, because one of the requirements for those games is that you cannot use aim-bots. OTOH, it’s easy to make an unhackable RTS. This is basic security-engineering: know what it is you’re trying to prevent, and often you find you can.

…or am I misunderstanding your point? Do you perhaps mean that it needs to be proved that the total sum of the software, the OS, the hardware, and the random interaction of cosmic rays (which cause random changes in RAM) and other acts of god when put together form an unhackable system?

And also because of several other side-effects such as every player needing large amounts of bandwidth. Broadband isn’t enough by a factor of 10 for some of the games I’ve seen proposed using P2P.

And probably also because so many serious developers are put off by the idiocy of some of the P2P evangelists who think it’s the Holy Grail. This debate has been going on for two decades. P2P solutions aren’t anything new, although recent CPU advances and internet connectivity are finally making them more viable.

No, no, and no.

You’re looking at it like a mathematician - assuming that the only way people will want to cheat is by having full direct control of their cheating. This is a fatal assumption, due to not fully realising the problems at hand. It is enough for cheaters to be able to merely probably alter the outcome - even in a random way. They will find ways to turn this to their advantage. A simple example is to repeatedly go up against a monster far too powerful for their character (or even, perhaps, invulnerable), and just hope they manage to corrupt one of the calculations for that battle. It might be enough. With hundreds of thousands of players, there’s actually a worryingly high chance of one of them pulling it off.
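To put a rough number on that: if a single fight has even a tiny probability p of being corrupted in the cheater’s favour, the chance of at least one success over N attempts is 1 - (1 - p)^N, and that climbs alarmingly fast. The figures below are purely illustrative:

[code]
public class CorruptionOdds {
    public static void main(String[] args) {
        double p = 1e-6;               // assumed chance that one fight gets favourably corrupted
        long attempts = 100000L * 50;  // say 100,000 players each grinding 50 hopeless fights
        double atLeastOnce = 1 - Math.pow(1 - p, attempts);
        System.out.printf("P(at least one success) = %.3f%n", atLeastOnce);
        // With these made-up numbers the result is about 0.993 - near certainty.
    }
}
[/code]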

So the number of nodes you need to compromise doesn’t necessarily come into it at all.

And any client-server architecture where hacking it once gives you full access has been designed by someone who probably deserves to go bankrupt. It’s a stupid idea. The concept of multi-level security has been with us for several decades. The standard server hacks mostly rely on cracking one security level at a time - and the hacker can be effectively stopped at any one of the levels.

[quote]…or am I misunderstanding your point? Do you perhaps mean that it needs to be proved that the total sum of the software, the OS, the hardware, and the random interaction of cosmic rays (which cause random changes in RAM) and other acts of god when put together form an unhackable system?
[/quote]
I was simply responding to the number of posts suggesting that p2p would never be viable (from a security standpoint) because you can only make it really difficult to cheat, not impossible. My point was that this was always going to be the case whether you were dealing with P2P or client-server, and therefore it was not a valid reason for immediately dismissing p2p.

Bandwidth requirements all depend on the game of course, but I completely agree that while p2p may buy you scalability, it doesn’t necessarily buy you performance.

I agree with you on all counts here.

Damn! I knew that fine arts degree would get me in trouble! :slight_smile:

Agreed, but this isn’t limited to p2p architectures.

Once again, this is largely dependent on your game design and the p2p architecture. As you’ve pointed out p2p architectures have been around (and maturing) for years. There are a number of techniques designed to reduce or eliminate the effects of corrupted data and rogue peers in much more security-conscious domains than games. In most of these architectures, it’s not that you need to compromise a significant percentage of nodes to have a significant impact, but that you need to compromise a significant percentage of nodes in order to have any impact at all, so the number of nodes makes a huge difference. This is, however, a much larger discussion.

You did read the article that prompted this post, right? Well, then the folks at UBI Soft and Wolfpack Studios deserve to go bankrupt. :slight_smile:

Multi-level security systems still get hacked. Games still get hacked (as do banks, major corporations, intelligence agencies, etc.). Yes, it’s getting much harder, but as long as it’s possible for someone to legitimately access and administer a system, it will be possible to exploit that capability. We can only make it really, really hard and keep trying to make it harder. Which was my point all along. To think there’s some magical security system that’s going to make you completely immune from cheating (client-server or p2p) is hopelessly naive. There are plenty of arguments for why an architecture may be suitable (or unsuitable), all of which should be taken into account when designing and implementing a game. I’m merely questioning the validity of the security argument in immediately discounting one approach or the other.

AT and I debate this a lot around Sun.

IMO there are a few things that are true:

(1) As long as humans are in the loop no system is perfect. Most servers get hacked due to pilot errors on the part of those with legitimate access.

(2) You CAN however raise the bar high and perhaps make it too difficult for at least your average script kiddie. The simplest (and only proven) way to do this is with server-side logic in a well designed and security audited backend.

(3) Unfortunately the best solutions for good gameplay tend to be diametrically opposed to this (moving functionality close to the player perceiving it). This is a fundamental tension in online game design, and balancing it is the job of a good technical architect.

(4) Cheating is a much bigger issue than most people realize. Having managed a commercial online environment (DSO), let me throw out a few things most people don’t think of:

(A) Cheaters collude. Naive cross-checking can thus fail. Related to this is the fact that only one guy in a group of cheaters needs to be smart enough to break your security, as long as his solution can be replicated through scripts or simple instructions.

(B) It often isn’t necessary for cheaters to actually go undetected or even affect the “game state.” Cheaters will resort to remarkably primitive means if it gets them their goal - up to and including just crashing the system. That goal can be as simple as annoying others or as complex as stopping the game because they are in a losing position.

This is another problem with distributed computing solutions. I don’t have to UNDERSTAND the data in order to crash out the system.

JK

[quote]AT and I debate this a lot around Sun.
[/quote]
and we’re bound to debate it here in the forums as well. :slight_smile:

[quote]IMO There are a few things that are true:

(1) As long as humans are in the loop no system is perfect. Most servers get hacked due to pilot errors on the part of those with legitimate access.
[/quote]
Very, very true.

Absolutely, and most folks working on secure P2P architectures have set the bar much higher than your average script kiddie (like military and corporate espionage).

[quote]The simplest (and only proven) way to do this is with server-side logic in a well designed and security audited backend.
[/quote]
I agree that this is the simplest way to do it, which is why it’s also the most popular, and by proven, I assume you mean “tried and true”. A number of techniques have been developed for securing various types of large-scale p2p systems from abuse, but they are not simple solutions and may not be readily available until more mature p2p infrastructures emerge.

[quote]Unfortunately the best solutions for good gameplay tend to be diametrically opposed to this (moving functionality close to the player perceiving it). This is a fundamental tension in online game design, and balancing it is the job of a good technical architect.
[/quote]
True, and unfortunately most currently viable, secure P2P architectures are more about making use of the network of CPUs than about moving privileged functionality closer to the client accessing it. In most cases the number of hops is the same, so the benefits of P2P tend to be in scalability, not performance. There is some interesting work being done in military applications for securing true p2p networks where all computing is done locally, but unfortunately my NDA prohibits me from saying more. :wink: Needless to say, this won’t be immediately useful to game developers, but may be more so in a few years.

Yep, and most secure P2P solutions assume this. As long as I know who’s processing my data, I can cheat. This was the issue with the die-rolling example for more than two players. The key is reducing or effectively eliminating the possibility of knowing who’s processing your data. Not easy, but doable, as demonstrated by the SETI@Home project (yep, people tried to cheat there too, some out of a desire to “improve” on the data analysis algorithms, others for less savory reasons).
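One simple-minded way to get part of the way there (sketch only; the work-unit and owner identifiers are made up) is for the server to strip the owner’s identity from each work unit and hand it to a randomly chosen client, so no client can tell whose data it is crunching:

[code]
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class AnonymousDispatcher {
    private final Map<String, String> unitOwners = new HashMap<String, String>();
    private final Random random = new Random();

    /** Picks a random client for the unit; only the server remembers whose it is. */
    public String dispatch(String unitId, String ownerId, List<String> clients) {
        unitOwners.put(unitId, ownerId);   // owner known only on the server side
        String target = clients.get(random.nextInt(clients.size()));
        // ...send (unitId, payload) to 'target'; the owner id never leaves the server
        return target;
    }

    /** When a result comes back, look up whose game state it belongs to. */
    public String ownerOf(String unitId) {
        return unitOwners.get(unitId);
    }
}
[/code]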

True. This is also a recognized issue in secure p2p development, and one where the self-healing qualities of the network actually serve as protection against these kinds of attacks. By eliminating a single point of failure, you can prune renegade nodes without disrupting the system. Mind you, this all depends on the specific design of the game and the type of p2p architecture you employ. Most of the p2p architectures I’m talking about are brokered (mediated by one or several protected servers, whose role it is to manage the communication and behavior of the nodes on the network).
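As a toy illustration of the brokered idea (the names and the threshold are assumptions, not anyone’s real architecture): the broker tracks how often each node’s results are contradicted by the rest of the pool and simply drops the ones that keep disagreeing:

[code]
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Broker {
    private static final int MAX_DISAGREEMENTS = 3; // arbitrary cutoff
    private final Map<String, Integer> disagreements = new HashMap<String, Integer>();
    private final Set<String> activeNodes = new HashSet<String>();

    public void register(String nodeId) {
        activeNodes.add(nodeId);
    }

    /** Called whenever a node's result is contradicted by the replicas' consensus. */
    public void reportDisagreement(String nodeId) {
        Integer count = disagreements.get(nodeId);
        int updated = (count == null ? 0 : count) + 1;
        disagreements.put(nodeId, updated);
        if (updated >= MAX_DISAGREEMENTS) {
            activeNodes.remove(nodeId); // prune the renegade node; the rest carry on
        }
    }

    public boolean isActive(String nodeId) {
        return activeNodes.contains(nodeId);
    }
}
[/code]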

Further note: For those of you wondering where all this p2p security work is happening, take a look under “Mobile Ad-Hoc Networks”, also known as MANETs. Most of these applications assume that a) any node (as well as the routes between nodes) may be compromised at any time, b) multiple nodes may be compromised and collude to fool (cheat) or disrupt the system, and c) there may be no central mediator to prevent these attacks. There is quite a bit of money (especially military money) going into this research (certainly more than the gaming community can afford at this point) and a number of interesting solutions have been developed.

Whether any or all of this work will be applicable to gaming remains to be seen, but as it shares many of the same concerns, I’m optimistic that some categories of online games will benefit from this work.

Great! I was just going to ask about literature on the subject, but Google seems to bring up enough stuff when one does a search for “Mobile Ad-hoc Networks”. You wouldn’t happen to know of any book for an engineer, rather than a researcher, on the work that has been done in the area though? (I’m thinking of something like Bruce Schneier’s Applied Cryptography but for MANETs)

[quote]Great! I was just going to ask about literature on the subject, but Google seems to bring up enough stuff when one does a search for “Mobile Ad-hoc Networks”. You wouldn’t happen to know of any book for an engineer, rather than a researcher, on the work that has been done in the area though? (I’m thinking of something like Bruce Schneier’s Applied Cryptography but for MANETs)
[/quote]
I’ll look through my notes, and ask around. I’m pretty sure I’ve seen at least one recently.