Be suspicious of vendor security claims like the following:

- It is built entirely from scratch, so it didn't inherit any bugs from any other products.
- There are no known attacks against it.
- It uses public key cryptography (or some other secure-sounding technology).
In addition, some vendors who claim their code is built from scratch apply extremely narrow definitions of "publicly available code". For instance, they may in fact use licensed code that is distributed in source format and is free for noncommercial use. Check copyright acknowledgments -- a program that includes copyright acknowledgments for the Regents of the University of California, for instance, almost certainly includes code from some version of the Berkeley Unix operating system, which is widely available. There's nothing wrong with that, but if you're paying for secret, written-from-scratch source code, you deserve to get what you're paying for.
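One quick way to check for such acknowledgments is to search the distributed files for telltale copyright strings. Here is a minimal sketch in Python; the marker string is just the Berkeley example mentioned above, and a real check would look for many more:

```python
# Scan a product's installed files for copyright strings that reveal
# inherited code. Extend MARKERS with other acknowledgments of interest.
import sys
from pathlib import Path

MARKERS = [b"Regents of the University of California"]

def scan(root):
    for path in Path(root).rglob("*"):
        if path.is_file():
            data = path.read_bytes()
            for marker in MARKERS:
                if marker in data:
                    print(f"{path}: {marker.decode()}")

if __name__ == "__main__":
    scan(sys.argv[1])
```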
People also point out that publicly available code gets more bug fixes and more rapid bug fixes than most privately held code; this is true, but this increased rate of change also adds new bugs.
There are two separate problems with services that are run as "unprivileged" users. The first is that the privileges needed for the service to function carry risks with them. A mail system must be able to deliver mail, and that's inherently risky. The second is that few operating systems let you control privileges so precisely that you can give a service exactly the privileges that it needs. The ability to deliver mail often comes with the ability to write files to all sorts of other places, for instance. Many programs introduce a third problem by creating accounts to run the service and failing to turn off default privileges that are unneeded. For instance, most programs that create special accounts to run the service fail to turn off the ability for their special accounts to log in. Programs rarely need to log in, but attackers often do.
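To make the mail-delivery example concrete, here is a minimal sketch (in Python, assuming a traditional Unix system) of the pattern a careful service follows: perform the one operation that genuinely requires privilege first, then permanently give up root before touching any untrusted input. The account name "maild" is hypothetical.

```python
import os
import pwd
import socket

def bind_and_drop(port=25, user="maild"):
    # Binding a port below 1024 requires root on traditional Unix
    # systems, so do it first, while the process is still privileged.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("", port))
    sock.listen(5)

    # Look up the dedicated unprivileged service account.
    entry = pwd.getpwnam(user)

    # Give up supplementary groups, then the group ID, then the user ID.
    # Once setuid() succeeds, the process cannot regain root, so a later
    # compromise is limited to this account's privileges.
    os.setgroups([])
    os.setgid(entry.pw_gid)
    os.setuid(entry.pw_uid)

    return sock
```

A well-behaved installer would also create such an account with logins disabled (a non-interactive shell and no password), since the account exists only to run the service.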
Nonetheless, it's possible to write secure software on almost any operating system, with enough effort, and it's easy to write insecure software on any operating system. In some circumstances, one operating system may be better matched to the service you want to provide than another, but most of the time, the security of a service depends on the effort that goes into securing it, both at design and at deployment.
Similarly, there's good public key cryptography, bad public key cryptography, and irrelevant public key cryptography. Merely adding public key cryptography to some random part of a product won't make it secure. The same is true of any other technology, no matter how exciting it is. A supplier who makes this sort of claim should be prepared to back it up by providing details of what the technology does, where it's used, and how it matters.
Other lists of vulnerabilities are often a better reflection of actual risks, since they will list problems that the vendor has chosen to ignore and problems that are there by design. On the other hand, they're still very much a popularity contest. The "exploit lists" kept by attackers, and by the people trying to keep up with them, focus heavily on attacks that provide the most compromises for the least effort. That means that popular programs are mentioned often and unpopular programs don't get much publicity, even if the popular programs are much more secure than the unpopular ones.
In addition, people who use this argument often provide big scary numbers without putting them in context; what does it mean if you say that a given web site lists 27 vulnerabilities in a program? If the web site is carefully run by a single administrator, that might be 27 separate vulnerabilities; if it's not, it may be the same 9 vulnerabilities reported three times each. In either case, it's not very interesting if competing programs have 270!
In general, publicly available code is modified faster than private code, which means that security problems are fixed more rapidly when they are found. This higher rate of change has downsides, which we discussed earlier, but it also means that you are less likely to be vulnerable to old bugs.
One real indicator of security is that a process is in place to distribute notifications of security problems and updates to the server.
Another indicator that needs scrutiny is a claim that the product was designed with security as a goal; check what kind of security is meant. For instance, a mail system may list "security" as a goal because it incorporates anti-spamming features or facilitates encryption of mail messages as they pass across the Internet. Those are both nice security goals, but they don't address the security of the server itself if an attacker starts sending it evil commands.
You may not be able or willing to review the code under appropriate conditions. That's usually OK, but you should at least verify that there is some procedure for code review.
More subtly, if you're getting a complex commercial package, you should be able to trust the distribution and release mechanism, and know that you have a complete and correct version with a retrievable version number. If your commercial vendor ships you a writable CD burned just for you and then advises you to FTP some patches, you need to know that some testing, integration, and versioning is going on. If they don't digitally sign everything and provide signatures to compare to, they should at least be able to provide an inventory list showing all the files in the distribution with sizes, dates, and version numbers.
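As an illustration of the inventory idea, here is a minimal sketch in Python that checks a distribution against a vendor-supplied manifest. The manifest format assumed here (one "sha256-digest  relative-path" pair per line) is hypothetical, not any particular vendor's format; a digitally signed manifest would be better still.

```python
# Verify distribution files against a manifest of expected SHA-256
# digests. Reports any file whose contents don't match.
import hashlib
import sys
from pathlib import Path

def verify_manifest(manifest_path, dist_root):
    ok = True
    for line in Path(manifest_path).read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        digest = hashlib.sha256(Path(dist_root, name).read_bytes()).hexdigest()
        if digest != expected:
            print(f"MISMATCH: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    # Usage: verify.py manifest.txt /path/to/distribution
    sys.exit(0 if verify_manifest(sys.argv[1], sys.argv[2]) else 1)
```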