Tuesday, January 28, 2014

UnboundID LDAP SDK for Java 2.3.6

We have just released the 2.3.6 version of the UnboundID LDAP SDK for Java. You can get the latest release online at the UnboundID website or the SourceForge project page, and it's also available in the Maven Central Repository.

This is primarily a maintenance release, containing bug fixes and some minor enhancements over the 2.3.5 version. A full copy of the release notes for this version may be found on the UnboundID website, but some of the most notable changes include:

  • It is now possible to create a connection pool with a health check, without the need to set the health check after the pool has been created. The primary benefit of this approach is that the health check will be used for the initial set of connections that are established when the pool is created (see the sketch after this list).

  • The LDIF change record implementations have been updated to add support for change records that include request controls.

  • Updated the GSSAPI and DIGEST-MD5 SASL bind requests to support the use of the integrity and confidentiality quality of protection modes.

  • Improved support for processing asynchronous operations so that it is possible to invoke an asynchronous add, compare, delete, modify, or modify DN operation without needing to provide a result listener if the result will be accessed using the Future API (also shown in the sketch after this list). Also updated the connection pool to make it possible to invoke multiple add, compare, delete, modify, modify DN, and/or search operations concurrently over the same connection as asynchronous operations.

  • Ensured that the thread-local connection pool uses the LDAPConnection.isConnected method as further verification that a connection is still established before attempting to use it.

  • Fixed a bug in the in-memory directory server that prevented the updated values of certain attributes (e.g., modifiersName and modifyTimestamp) from appearing in the entry returned in a post-read response control.

  • Fixed a potential thread safety bug in the get entry connection pool health check.
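
To illustrate the first and fourth items above, here's a quick sketch that creates a pool with a health check supplied up front and then processes an asynchronous modify whose result is retrieved through the Future-based API. The server address, DNs, and credentials are placeholders, and the exact constructor overload and health check arguments are assumptions based on this release rather than authoritative documentation:

    import java.util.concurrent.TimeUnit;

    import com.unboundid.ldap.sdk.AsyncRequestID;
    import com.unboundid.ldap.sdk.GetEntryLDAPConnectionPoolHealthCheck;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPConnectionPool;
    import com.unboundid.ldap.sdk.LDAPResult;
    import com.unboundid.ldap.sdk.Modification;
    import com.unboundid.ldap.sdk.ModificationType;
    import com.unboundid.ldap.sdk.ModifyRequest;
    import com.unboundid.ldap.sdk.SimpleBindRequest;
    import com.unboundid.ldap.sdk.SingleServerSet;

    public class PoolWithHealthCheckExample
    {
      public static void main(final String[] args)
             throws Exception
      {
        // A health check that verifies connections by retrieving a known entry.
        final GetEntryLDAPConnectionPoolHealthCheck healthCheck =
             new GetEntryLDAPConnectionPoolHealthCheck("dc=example,dc=com",
                  30000L, true, true, true, true, true);

        // Because the health check is provided at creation time, it is also
        // applied to the initial connections established for the pool.
        final LDAPConnectionPool pool = new LDAPConnectionPool(
             new SingleServerSet("ldap.example.com", 389),
             new SimpleBindRequest("cn=App,dc=example,dc=com", "password"),
             1, 10, null, healthCheck);

        // Invoke an asynchronous modify without a result listener and obtain
        // the result through the Future API.
        final LDAPConnection conn = pool.getConnection();
        try
        {
          final ModifyRequest modifyRequest = new ModifyRequest(
               "uid=test.user,ou=People,dc=example,dc=com",
               new Modification(ModificationType.REPLACE, "description",
                    "async update"));
          final AsyncRequestID requestID =
               conn.asyncModify(modifyRequest, null);
          final LDAPResult result = requestID.get(10L, TimeUnit.SECONDS);
          System.out.println("Modify result: " + result.getResultCode());
        }
        finally
        {
          pool.releaseConnection(conn);
        }

        pool.close();
      }
    }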

Thursday, November 21, 2013

UnboundID LDAP SDK for Java 2.3.5

We have just released the 2.3.5 version of the UnboundID LDAP SDK for Java. You can get the latest release online at the UnboundID website or the SourceForge project page, and it's also available in the Maven Central Repository.

There are a lot of improvements in this release over the 2.3.4 version. A full copy of the release notes may be found on the UnboundID website. Many of the improvements are related to connection pooling, load balancing, and failover, but there are other additions and a number of bug fixes in this release as well. Some of the most notable changes include:

  • Added a new fewest connections server set. If it is used to create a connection pool in which connections span multiple servers, the pool will try to establish each newly-created connection to the server that has the fewest active connections already opened by that server set (see the sketch after this list).

  • Updated the LDAPConnectionPool class to make it possible to specify an alternate maximum connection age that should be used for connections created to replace a defunct connection. In the event that a directory server goes down and pooled connections are shifted to other servers, this can help connections fail back more quickly.

  • Updated the failover server set to make it possible to specify an alternate maximum connection age for pooled connections that are established to a server other than the most-preferred server. This can help ensure that failover connections are able to fail back more quickly when the most-preferred server becomes available again.

  • Added a new version of the LDAPConnectionPool.getConnection method that can be used to request a connection to a specific server (based on address and port), if such a connection is readily available.

  • Added a new LDAPConnectionPool.discardConnection method that can be used to close a connection that had been checked out from the pool without creating a new connection in its place. This can be used to reduce the size of the pool if desired.

  • Added a new LDAPConnection.getLastCommunicationTime method that can be used to determine the time that the connection was last used to send a request to or read a response from the directory server, and by extension, the length of time that connection has been idle.

  • Updated the connection pool so that by default, connections which have reached their maximum age will only be closed and replaced by the background health check thread. Previously, the LDAP SDK would also check the connection age when a connection was released back to the pool (and this option is still available if desired), which could cause excess load against the directory server as a result of a number of connections being closed and re-established concurrently. Further, checking the maximum connection age at the time the connection is released back to the pool could have an adverse impact on the perceived response time for an operation because in some cases the LDAP SDK could close and re-establish the connection before the result of the previous operation was made available to the caller.

  • Updated the LDIF writer to add the ability to write the version header at the top of an LDIF file, to ensure that modify change records include a trailing dash after the last change in accordance with the LDIF specification, and to fix a bug that could cause it to behave incorrectly when configured with an LDIF writer entry translator that created a new entry as opposed to updating the entry that was provided to it.

  • Dramatically improved the examples included in the Javadoc documentation. All of these examples now have unit test coverage to ensure that the code is valid, and many of the examples now reflect more real-world usage.

  • Improved the quality of error messages that may be returned for operations that fail as a result of a client-side timeout, or for certain kinds of SASL authentication failures. Also improved the ability to perform low-level debugging for responses received on connections operating in synchronous mode.

  • Updated the in-memory directory server to support enforcing a maximum size limit for searches.

  • Added a couple of example tools that can be used to find supposedly-unique attribute values which appear in multiple entries, or to find entries with DN references to other entries that don't exist.

  • Made a number of improvements around the ability to establish SSL-based connections, or to secure existing insecure connections via StartTLS. These include making it possible to specify the default SSL protocol via a system property (so that no code change is required to use a different default protocol) and the ability to define a timeout for StartTLS processing as part of establishing a StartTLS-protected connection.

  • Fixed a bug that could cause the LDAP SDK to enter an infinite loop when attempting to read data from a malformed intermediate response.

  • Fixed a bug that could cause problems in handling the string representation of a search filter that contained non-UTF-8 data.
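
As an example of the fewest connections server set mentioned above, here's a minimal sketch of creating a pool that spreads its connections across two servers. The addresses and credentials are placeholders, and the constructor arguments are a best guess at the new API rather than authoritative documentation:

    import com.unboundid.ldap.sdk.FewestConnectionsServerSet;
    import com.unboundid.ldap.sdk.LDAPConnectionPool;
    import com.unboundid.ldap.sdk.SimpleBindRequest;

    public class FewestConnectionsExample
    {
      public static void main(final String[] args)
             throws Exception
      {
        // Each new pooled connection will be established to whichever of
        // these servers currently has the fewest active connections that
        // were created by this server set.
        final String[] addresses = { "ldap1.example.com", "ldap2.example.com" };
        final int[] ports = { 389, 389 };
        final FewestConnectionsServerSet serverSet =
             new FewestConnectionsServerSet(addresses, ports);

        final LDAPConnectionPool pool = new LDAPConnectionPool(serverSet,
             new SimpleBindRequest("cn=App,dc=example,dc=com", "password"),
             10, 20);

        // ... use the pool as usual ...

        pool.close();
      }
    }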

Friday, June 21, 2013

UnboundID LDAP SDK for Java 2.3.4

We have just released the 2.3.4 version of the UnboundID LDAP SDK for Java. You can get the latest release online at the UnboundID website or the SourceForge project page, and it's also available in the Maven Central Repository.

The main reason for this release is the disclosure of a security vulnerability (VU#225657) that affects the Oracle Javadoc tool and all Javadoc content generated with affected versions of that tool, which includes the Javadoc documentation provided with earlier versions of the UnboundID LDAP SDK for Java. The 2.3.4 release of the LDAP SDK has been generated with an updated version of the Javadoc tool that should no longer be vulnerable to the referenced bug.

There are a few other updates in this release, including:

  • We have fixed a bug that could cause a pooled connection to be unnecessarily closed and re-established when performing a simple bind on a connection operating in synchronous mode.

  • We have fixed a bug in the schema parser that could prevent it from parsing certain schema elements from their string representations if the last element in those elements was an OBSOLETE, SINGLE-VALUE, or NO-USER-MODIFICATION token and there was no space between that token and the closing parenthesis that followed it.

  • We have updated the disconnect handler mechanism to provide more assurance that there would not be multiple notifications for a single disconnect.

  • We have added support for the Microsoft DirSync control, which may be used to discover information about changes processed in an Active Directory server.

  • We have fixed a bug in the entry validator (and the validate-ldif tool that uses the entry validator) that could incorrectly classify entries that had multiple structural object classes as entries that did not have any structural class.

  • We have updated the LDAP command-line tool API to make it possible to create tools that support establishing LDAP connections but without offering options to authenticate those connections. We have also updated the API to make it possible to provide passwords to the tool by interactively prompting for them rather than requiring them to be provided as command line arguments or included in clear-text files on the filesystem.

Wednesday, May 8, 2013

UnboundID LDAP SDK for Java 2.3.3

We have just released the 2.3.3 version of the UnboundID LDAP SDK for Java, with a number of improvements and bug fixes over the 2.3.1 release (NOTE: the only difference between the 2.3.2 and 2.3.3 releases is a fix for a Javadoc formatting problem). You can get the latest release online at the UnboundID website or the SourceForge project page, and it's also available in the Maven Central Repository.

As usual, the release notes provide a complete overview of changes made in this release, but some of the most significant updates include:

  • A number of connection pooling improvements and fixes, including: It is now possible to create a connection pool even if a failure is encountered while establishing the initial set of connections, which is useful for cases in which the target directory server is unavailable when the application is starting. You can establish and/or close connections in parallel, which helps speed things up if the pool has a lot of connections or the server is remote or otherwise slow to respond. Pooled connections that cache server schema information are established more quickly. Methods have been added to help undo the effects of processing a bind on a pooled connection so that connections stay authenticated as the account used when they were initially established.

  • A number of persistence framework improvements and fixes, including: The default object encoder now provides generic support for serializable objects. A new getAll method makes it possible to retrieve all objects of a specified type below a given base DN. Fixed a bug related to generating an appropriate set of modifications for an updated object, and to generating filters that define criteria to use when searching for objects. Improved the output of the generate-schema-from-source tool.

  • A number of In-Memory Directory Server and LDAP listener improvements and fixes, including: It is now possible to combine multiple schema files to create the schema for the in-memory directory server. The canned response request handler now has the ability to customize the entries and references to return for a search operation. The LDAP debugger tool now has the ability to accept SSL-based connections.

  • A number of LDIF-related improvements and fixes, including: Added an LDIFWriterEntryTranslator interface that can make it easier to transform entries to be written to LDIF. Fixed a bug that could cause problems reading LDIF records with comments that have been wrapped across multiple lines. Added a convenience method to get the entry to be added from an LDIF add change record (previously it was only possible to get the DN and attributes separately).

  • A number of tool-related improvements and fixes, including: Added a new multi-server command-line tool API, which makes it easier to create tools that need to interact with multiple directory servers. Made it easier to register a JVM shutdown hook to invoke code when the JVM in which the tool is running begins shutting down.

  • Added the ability to specify the default SSL/TLS protocol version to use when creating secure connections (or using StartTLS) for cases in which the caller doesn't explicitly specify a protocol.

  • Dramatically improved the performance of the Attribute equals method for attributes that have a large number of values.

Tuesday, June 12, 2012

How LinkedIn Missed Out

Within the last week, LinkedIn, eHarmony, and Last.fm have all announced security breaches that resulted in millions of user passwords being exposed to the world. While the passwords were encoded with one-way digests, that didn't stop attackers from discovering the clear-text passwords for a large percentage of those accounts.

There's no such thing as a perfectly secure system, and brilliant and determined hackers have a lot of tricks up their sleeves, from technical assaults like exploiting unpatched software vulnerabilities to social engineering cons like sweet-talking secretaries. But there are a lot of things that LinkedIn and the others should have done to help prevent this kind of breach.

Consider Using Third-Party Authentication

The best way to ensure that attackers can't break into your system and steal user passwords is for your system to not have any user passwords. Authentication systems like OpenID and OAuth allow you to push the responsibility for authenticating users (and securing their accounts) to a third party. If you're willing to leave the authentication to an organization like Google, Facebook, or Yahoo, and your users are willing to accept this solution, then you don't need to worry about storing credentials at all.

Note, however, that passwords may not be the only sensitive information that you need to store in your system. In fact, attackers generally don't care about user passwords themselves but rather the other information about users that passwords can be used to access. As a result, you should treat all the information that you store about users with the utmost care and sensitivity because any compromise of your system will be bad for both your users and your reputation, regardless of whether that compromise includes login credentials.

It is therefore important that you store your data in a repository that offers the kinds of security features you need to help ensure it is adequately protected. The UnboundID Directory Server offers a wide range of security features, including several that you won't be able to find in other products. Some of these features are primarily intended to help secure user passwords, but many can be applied for all kinds of information.

Use Salted Password Hashes

The people who designed the security systems at these companies knew enough to use cryptographic digests (also called hashing algorithms) to protect the passwords. These are mathematical algorithms that transform passwords in a way that is believed to be impossible to reverse. Good cryptographic digests (like the SHA-1 algorithm used by LinkedIn) offer a good level of protection because they prevent attackers from knowing useful information like how long a password is or what kinds of characters it might contain, and if they're guessing passwords then they won't be able to tell how close any guess is to being right.

But one of LinkedIn's big mistakes is that they didn't salt their passwords. One-way digest algorithms always generate exactly the same hash for a given input. This is essential for their use in applications like protecting passwords and verifying data integrity, but it also means that attackers can (and do) prepare ahead of time. I may not be able to look at the string "6367c48dd193d56ea7b0baad25b19455e529f5ee" and know that it's the SHA-1 digest for the string "abc123", but if I have a big dictionary of commonly-used passwords, I can run each of the strings in that dictionary through the SHA-1 digest to get its encoded representation. Then, when I find an encoded password that looks like "6367c48dd193d56ea7b0baad25b19455e529f5ee", I can use my precomputed dictionary to do a reverse lookup and know that it represents the password "abc123".

Password salting prevents this kind of attack because it adds a random element into each encoded password. When you're going to create a salted password, you first come up with some random data (for example, "kMPsCwaT") and prepend it to the clear-text password (like "kMPsCwaTabc123"). Then, you run that through the digest algorithm (to get a hash like "51c4125fe2e2e94bdefa8f7a8e5c12ebfd94833b"), and finally prepend the salt to the hash (e.g., "kMPsCwaT51c4125fe2e2e94bdefa8f7a8e5c12ebfd94833b"). When a user is authenticating, it then becomes necessary to pull the salt off the encoded password and prepend that to the clear-text password that they provide before running it through the digest algorithm.
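
To make that concrete, here's a bare-bones sketch of the salt-then-hash flow in Java. It's illustrative only: a real implementation would generate the salt with SecureRandom, operate on raw bytes rather than strings, and use a stronger scheme than a single SHA-1 pass:

    import java.security.MessageDigest;

    public class SaltedHashExample
    {
      private static final char[] HEX = "0123456789abcdef".toCharArray();

      // Encode a password as salt + hex(SHA-1(salt + password)).
      public static String encode(final String password, final String salt)
             throws Exception
      {
        final MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        final byte[] digest =
             sha1.digest((salt + password).getBytes("UTF-8"));
        return salt + toHex(digest);
      }

      // Verify by pulling the salt off the stored value and re-encoding the
      // provided clear-text password with it.
      public static boolean matches(final String password,
                                    final String stored, final int saltLength)
             throws Exception
      {
        final String salt = stored.substring(0, saltLength);
        return stored.equals(encode(password, salt));
      }

      private static String toHex(final byte[] bytes)
      {
        final StringBuilder buffer = new StringBuilder(bytes.length * 2);
        for (final byte b : bytes)
        {
          buffer.append(HEX[(b >> 4) & 0x0F]).append(HEX[b & 0x0F]);
        }
        return buffer.toString();
      }
    }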

Because salted passwords contain random data, it's not possible for an attacker to have a dictionary prepared in advance. Running through a dictionary (or using a brute-force approach to try every possible combination of characters) will take a lot longer to crack a password, and even if you are successful for one user, that won't help for any other user even if they chose the same password because it will have a different salt and therefore a different encoded representation.

The UnboundID Directory Server supports password salting out of the box in a manner that is completely transparent to clients. It can be used in conjunction with MD5, SHA-1, and any of the 256-bit, 384-bit, and 512-bit SHA-2 variants. It's on by default, and would have made it a lot harder (or at least taken a lot longer) for attackers to discover the clear-text representations of the stolen passwords.

Use Expensive Encoding Algorithms

If someone gets access to an encoded password and knows how it was encoded, then there really is no way to prevent them from discovering the clear-text password used to generate that hash. If they want it bad enough, they can simply try every possible combination of characters, and with relatively cheap access to distributed computing, it's possible to generate trillions of hashes per second. The only variables that affect this are the strength of the password (as will be discussed below), and the algorithm used to generate its encoded representation. For example, using the 512-bit SHA-2 instead of the 160-bit SHA-1 just about doubles the length of time required to generate a digest, which in turn means that it just about doubles the length of time an attacker will have to spend trying to crack a password.

To make the process even more expensive, you can have your password encoding process use multiple rounds of hashing. For example, take the clear-text password, salt it, and run that through the digest algorithm. Then take the resulting hash and run it through the digest algorithm again. And again. Repeat this process so that the final encoded password requires 5000 hashes. If you're using 512-bit SHA-2, then this process is now about ten thousand times more expensive than the simple SHA-1 process used for the leaked passwords, meaning that it will take an attacker ten thousand times longer to crack a password with this encoding scheme than with SHA-1.
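
A sketch of that multi-round variant follows. The loop shown is the simplest possible form; real schemes such as the SHA-crypt scheme used on Linux and UNIX systems (mentioned below) also mix the salt and password back in on each round:

    import java.security.MessageDigest;

    public class IteratedHashExample
    {
      // Hash the salted password once, then keep re-hashing the result so
      // that each verification (and each of an attacker's guesses) costs
      // the requested number of digest computations.
      public static byte[] iteratedHash(final String password,
                                        final String salt, final int rounds)
             throws Exception
      {
        final MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
        byte[] digest = sha512.digest((salt + password).getBytes("UTF-8"));
        for (int i = 1; i < rounds; i++)
        {
          digest = sha512.digest(digest);
        }
        return digest;
      }
    }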

The UnboundID Directory Server provides support for password encoding algorithms that employ thousands (or potentially even millions, if you're really paranoid) of rounds of hashing using the 256-bit or 512-bit SHA-2 algorithms. This scheme is already used to protect login passwords on many Linux and UNIX systems, so it has been designed and carefully scrutinized by many security experts and is considered extremely strong.

Reject Weak Passwords

If an attacker obtains an encoded password, then the biggest risk of that password being cracked comes from the strength of the password itself. A password that is a word from the dictionary or a name or date or other common string will almost certainly be broken in a fraction of a second. If a password is relatively short, then so also will be the time required to discover it even if it's necessary to try every possible combination of characters. For example, if a beefy system can generate 100 billion hashes per second, then it will only take about two seconds to try every possible combination of eight lowercase ASCII letters.

The two biggest factors in password complexity are the length of the password and the set of possible characters it may contain (which we'll call the password alphabet), and they're related by the mathematical formula a^l, where a represents the size of the password alphabet and l is the length of the password. For example, if the alphabet is the set of lowercase ASCII letters and the length is eight characters, then there are 26^8 = 208,827,064,576 possible password combinations (which seems like a big number, until you realize how fast computers are at performing these computations). However, if you instead consider both uppercase and lowercase letters, numeric digits, and a number of symbols, then the alphabet can grow to about 95 characters, and there would then be 95^8 = 6,634,204,312,890,625 possible values, which is nearly 32,000 times larger and would take the better part of a day to crack on a system capable of trying a hundred billion passwords per second. And increasing the length to ten characters takes the time to crack from a little over eighteen hours to a little over eighteen years.
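
If you want to check that arithmetic yourself, it only takes a few lines (the hundred-billion-guesses-per-second rate is the hypothetical figure from the paragraph above):

    import java.math.BigInteger;

    public class CrackTimeExample
    {
      // Seconds needed to try every password of the given length drawn from
      // an alphabet of the given size, at the given guess rate.
      private static BigInteger seconds(final int alphabetSize,
                                        final int length,
                                        final BigInteger guessesPerSecond)
      {
        return BigInteger.valueOf(alphabetSize).pow(length)
                    .divide(guessesPerSecond);
      }

      public static void main(final String[] args)
      {
        final BigInteger rate = BigInteger.valueOf(100000000000L);
        System.out.println(seconds(26, 8, rate));   // 2 seconds
        System.out.println(seconds(95, 8, rate));   // 66,342 seconds (~18.4 hours)
        System.out.println(seconds(95, 10, rate));  // ~599 million seconds (~19 years)
      }
    }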

The best way to ensure that users have strong passwords is to configure your system to enforce restrictions around password length, the kinds of characters that may be used, and other kinds of constraints. The UnboundID Directory Server ships with support for a number of password validators, including:

  • A dictionary validator, which can be used to ensure that users aren't allowed to supply passwords that exist in a given dictionary file (optionally including testing the reversed password).
  • A length validator, which can be used to ensure that users aren't allowed to choose a password that is too short (or potentially too long, although there's little reason to configure a maximum length).
  • A character set validator, which can be used to ensure that passwords include at least a specified minimum number of characters from a number of different character sets.
  • A unique characters validator, which can be used to ensure that passwords contain at least a specified number of different characters.
  • A repeated characters validator, which can be used to ensure that passwords don't contain repeated strings.
  • An attribute value validator, which can be used to ensure that the supplied password does not match the value of any other attribute in the user's entry.
  • A regular expression validator, which can be used to ensure that the supplied password matches (or alternately, does not match) a given regular expression.
  • A similarity validator, which can be used to ensure that when a user is changing his or her password, the new password is not too similar to the previous password.

In addition, the UnboundID Server SDK can be used to develop additional custom password validators if those provided with the server by default are not sufficient. The server also provides a password history feature that can prevent users from repeatedly reusing the same set of passwords.

When allowing users to choose their passwords, it is also important to ensure that those passwords are always supplied in clear-text. Some systems allow users to provide the password in a form that is already encoded in a manner that the server can interpret. This is undesirable, because if a user is allowed to supply a pre-encoded password, then the server has no idea what the clear-text representation is, and therefore cannot determine whether it satisfies all of the configured password quality requirements.

Of course, strong passwords aren't much good if users can't remember them. This is a very real problem that needs to be considered, and it is compounded by the fact that your users will likely need to have accounts in other systems. In reality, this means that users will probably choose one of the following options:

  • Use a password manager that automatically keeps track of all the usernames and passwords they use across all the systems they access. This is the best solution, and one that you should probably recommend, since users then only have to remember one password for the password manager and can let it remember all the others. Unfortunately, less experienced users may find this prospect kind of scary.
  • Write their passwords down or keep them in a file. This is kind of the low-tech version of a password manager, and as long as all copies of the password list are protected, it's reasonably safe.
  • Use the same password (or a small set of passwords) for all sites they access. This is dangerous, because if their password is compromised on one site, then it may be used to gain access to other sites. There's not much that can be done to prevent this (other than requiring multifactor authentication), but unless it's an account that has elevated privileges, it will be more likely to adversely affect that one user than the overall security of your data store.
  • Forget the password they chose and rely on the "I forgot my password" mechanism to reset it each time they need to access your system. If the reset process is automated and requires them to receive an e-mail or SMS message, then it may be a relatively secure approach (assuming that the e-mail address and/or phone number were previously verified), but if it involves talking to a human then it will probably raise your support costs and may allow a skilled social engineer to convince the support personnel to grant them access to someone else's account.

It is unfortunate that non-technical users will be the ones who find strong password requirements to be the most onerous and unpleasant, but it is important to decide whether end user convenience is more important than end user security.

Prevent Online Access to Passwords

While it isn't clear exactly how the attackers were able to obtain these large lists of encoded passwords, it's likely that they were able to manipulate the identity store (or an application using it) into performing a query that exposed this information to the requester. For example, if the site was using a relational database, then maybe the attackers discovered a flaw in an application that allowed for SQL injection.

Identity data stores (whether relational databases, LDAP directory servers, or some other kind of repository) should be like roach motels for passwords: passwords can go in, but they can't come out. It shouldn't be possible for even an all-powerful user to perform a query that can retrieve password information, and the system should also require that all passwords supplied to it (whether validating credentials during authentication or supplying a new password) be processed over a secure communication channel so that anyone with the ability to observe the network traffic cannot examine it in order to learn passwords.

The UnboundID Directory Server includes a sensitive attributes feature that makes it possible to do exactly this for passwords and other kinds of sensitive information. You can easily configure the server so that passwords will be stripped from results even for requests from root users, and to require that any operations attempting to manipulate the values of such attributes be allowed only over a secure connection. It is possible to configure this restriction to be in effect across the server, but it is also possible to tailor the behavior to the requester (based on a number of characteristics, like who's asking, how they're authenticated, where they're coming from, whether the connection is secure, etc.), so that if there is an application with a legitimate need to retrieve this information, the server can be configured to return the data only to that application.

The UnboundID Directory Server also includes very powerful and flexible logging capabilities, and it is possible to configure the server to log information about any operation in which passwords (or other attributes which may contain sensitive data) are returned to the client. This can be useful for auditing purposes, so that if an attacker is somehow able to issue a query that returns sensitive information, you can see exactly which entries and which attributes were accessed.

Encrypt Server Data, Including Backups and Data Exports

Even if they weren't able to get the data store to expose the passwords via a network request, it's possible that they were able to get the password data in some other way. For example, if they were able to obtain access to the system on which the data was stored, then perhaps they were able to examine the database files directly, or perhaps they were able to obtain a backup or export of the data that included passwords.

It is important to ensure that adequate protection is in place for all copies of the server data, whether on the live running system or in backups. Certainly this includes things like restricting access to systems which can access this information and ensuring that only authorized users are able to access the data files, but it is also recommended that you encrypt such information so that even if an attacker does gain access to it, they won't be able to extract anything useful from it.

The UnboundID Directory Server makes it easy to enable encryption for all data so that it is never stored in the clear. It also provides the ability to encrypt backups and LDIF exports so that the information is protected in these forms as well.

Prevent Repeated Authentication Attempts

If you have adequately protected your system to prevent retrieving passwords from the data store, and if all copies of the data are encrypted, then attackers should be prevented from obtaining encoded passwords for users. However, there may be other ways to crack user passwords. The most common of these approaches is simply to try to repeatedly authenticate as that user with different passwords until you finally get it right. This will be much slower than the offline attacks that are available with access to encoded passwords, but given enough time, determination, and luck, it may just work. Of course, these kinds of attacks can be thwarted by limiting the number of unsuccessful authentication attempts that will be allowed for a user account before that account is locked.

The UnboundID Directory Server can be configured to lock accounts after too many authentication failures so that any subsequent attempts will fail even if the right credentials are given. This lockout can be either temporary (so that additional login attempts will be allowed after a specified period of time) or permanent (so that the user will not be allowed to authenticate at all until an administrator resets the password). It can also be configured to log and/or notify administrators when an account is locked as a result of failures, along with a number of other significant password policy events.

Enforce Authentication Restrictions

In many environments, it is common for each application to have its own account that will be used to perform operations in the server, and that application account may have more rights than normal user accounts. If an attacker is able to compromise an application account, then he or she may be able to wreak more havoc than with other user accounts.

To help prevent this, it is advisable to restrict application accounts so that they are less likely to be useful to attackers even if their credentials are discovered. For example, you may want to restrict their use to a certain range of IP addresses and/or to only allow them to authenticate in certain ways. The UnboundID Directory Server provides a number of features like this, including restrictions based on client address, authentication type, and communication security, as well as restrictions around the use of proxied authorization. In addition, client connection policies can also be used to permit or restrict operations based on a number of characteristics about the client.

Support Multifactor Authentication

One great way to mitigate the risk of compromised passwords is to make use of multifactor authentication, which requires the user to provide multiple pieces of information to confirm his or her identity. This usually manifests as two-factor authentication combining something you know (like a password) with something you have (e.g., some kind of device capable of generating one-time passwords or PINs). In a system that uses multifactor authentication, if an attacker discovers a user's password, then they won't be able to do anything with it unless they also have a way of obtaining the other credentials.

Multifactor authentication used to be a rather inconvenient prospect because it required that you actually provide your users with a physical device like a SecurID token (which has a numeric display with numbers that change every minute), and if you didn't have that token with you, then you couldn't authenticate. However, the proliferation of mobile phones has made this a much more realistic possibility. For example, you can configure your Google account to require multifactor authentication, combining a password with a one-time code obtained in one of the following ways:

  • Using the Google Authenticator app, which uses the time-based one-time password (TOTP) algorithm defined in RFC 6238. This option does not require any kind of network communication.
  • By having a one-time password sent to your mobile phone as a text message.
  • By having a one-time password read to you by a speech synthesizer over a voice call.

With multifactor authentication support enabled, an attacker will need to get access to your mobile phone in addition to figuring out your password before they'll be able to access your Google account.

Similarly, Facebook can be configured to require a one-time code sent as an SMS message any time you log in from a system it doesn't recognize. There are a number of other applications which provide some level of support for multifactor authentication, but it's unfortunate that there are still so many who do not offer it as an option so that their more technical and security-conscious users can take advantage of the additional security that it can provide.

At present, the UnboundID Directory Server supports multifactor authentication in the form of a password combined with a PIN generated using the TOTP algorithm (and is therefore compatible with the Google Authenticator app, along with other software capable of generating these codes). However, support for additional multifactor authentication schemes may be added in the future.
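
For the curious, the core of the TOTP algorithm from RFC 6238 is small enough to sketch here: an HMAC-SHA-1 over the number of 30-second intervals since the epoch, followed by the dynamic truncation defined in RFC 4226. This is an illustrative implementation, not the server's actual code:

    import java.nio.ByteBuffer;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class TOTPExample
    {
      // Generate a six-digit TOTP code for the given shared secret and time.
      public static int generateCode(final byte[] sharedSecret,
                                     final long timeMillis)
             throws Exception
      {
        // The moving factor is the number of 30-second intervals that have
        // elapsed since the UNIX epoch.
        final long counter = timeMillis / 1000L / 30L;
        final byte[] counterBytes =
             ByteBuffer.allocate(8).putLong(counter).array();

        final Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        final byte[] hash = hmac.doFinal(counterBytes);

        // Dynamic truncation from RFC 4226: the low four bits of the last
        // byte select an offset, from which 31 bits are taken and reduced
        // modulo one million to yield six digits.
        final int offset = hash[hash.length - 1] & 0x0F;
        final int binary = ((hash[offset] & 0x7F) << 24)
             | ((hash[offset + 1] & 0xFF) << 16)
             | ((hash[offset + 2] & 0xFF) << 8)
             | (hash[offset + 3] & 0xFF);
        return binary % 1000000;
      }

      public static void main(final String[] args)
             throws Exception
      {
        final byte[] secret = "12345678901234567890".getBytes("US-ASCII");
        System.out.printf("%06d%n",
             generateCode(secret, System.currentTimeMillis()));
      }
    }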

Plan for Disaster

No matter how tightly locked down your system may be, you should have a plan for dealing with a security breach in which an attacker is able to successfully obtain passwords and/or other sensitive information from your system. Hopefully you'll never have to put this plan into action, but it's far better to have a strategy you may not use than to find yourself without one when you really need it.

The first thing that you should do is to identify what information may have been compromised, and then tell your users about it. Trying to cover it up may be illegal, and it prevents affected users from trying to mitigate the risk of the attack while giving the bad guys more time to use the data they stole. It will also likely be more damaging to your reputation if users hear about a breach in your systems from someone else before they hear it from you.

If information about passwords may have been exposed, regardless of whether the passwords were encoded, then you should encourage or require users to change their passwords as soon as possible (and the UnboundID Directory Server offers a feature that can be used to require users to change their passwords by a given date/time). However, you should also assume that if login credentials were obtained, an attacker could have authenticated using those credentials and obtained access to other information about the account.

You should also try to determine how the attack was conducted so that you can make any changes in your system to help prevent the attack from recurring. If there isn't enough information available to determine the attack vector, then you may need to make broad changes to tighten the overall security of the environment. You will likely also want to ensure that appropriate logging is in place to make it easier to discover and analyze attacks in the future.

Finally, you should work with the vendors of any software that attackers were able to breach so that they can better understand how their software is being targeted and identify whether any changes may be needed to help prevent future problems. Even if the software you're using has features which could have prevented the attack from succeeding, the vendor may wish to consider enabling those features by default or better highlighting them in their documentation. Certainly we at UnboundID would be very interested to hear of attacks (whether successful or not) against our software so that we can continue to reevaluate the product features and default configuration. And if you have suggestions for improvement, we'd love to hear them as well.

Friday, May 4, 2012

UnboundID LDAP SDK for Java 2.3.1

The 2.3.1 release of the UnboundID LDAP SDK for Java primarily includes a number of bug fixes and minor functionality enhancements, many of which are in direct response to requests from users. You can get the latest release online at the UnboundID website or the SourceForge project page, and it's also available in the Maven Central Repository.

As usual, the release notes provide a complete overview of changes made in this release, but some of the most significant updates include:

  • The 2.3.0 release added the ability for the LDAP SDK to respect client-side timeouts for operations invoked via the asynchronous API. Unfortunately, for applications which had a very high rate of asynchronous operations, a bug in this implementation could cause excessive memory pressure (potentially including out of memory errors). That bug has been corrected.

  • Also in the 2.3.0 release, a change was made to prevent simultaneous use of the socket factory associated with the client connection. This was done in response to the discovery that some socket factories in the IBM JVM (at least the SSL socket factory, if not others) may fail if an attempt is made to use them concurrently from multiple threads. Unfortunately, while this change made the LDAP SDK safer to use on such platforms, it also introduced a problem for other JVMs that could cause long delays in the ability to establish a connection following an attempt to connect to a server that is either unresponsive or slow to respond. In an attempt to strike a balance between these problems, concurrent use will be allowed on JVMs known to be thread-safe (including those provided by Sun, Oracle, and Apple), while still defaulting to single-threaded use on other JVMs. In addition, it is now possible to configure whether this should be allowed on a per-connection basis using a new setting in the LDAPConnectionOptions class.

  • A number of new SSL trust managers have been added, including one which looks only at the validity dates of the presented certificate, another that looks at the hostname of the certificate (either in the CN subject attribute or a subjectAltName extension), and an aggregate trust manager that can be used to decide whether to trust a certificate based on the combined results of a set of trust managers (see the sketch after this list). Also, the prompt trust manager has been updated to display additional information about the certificate to allow the user to make more informed decisions about whether to trust it.

  • Support for the SASL EXTERNAL bind request has been updated to make it possible to either include or exclude the SASL credentials element. This makes it possible to work with directory servers which require SASL credentials as well as those which do not expect them for EXTERNAL requests.

  • We have added a new server set implementation which will attempt to simultaneously connect to multiple servers, and will return the first connection it was able to establish. While this may increase the load across all servers at the time of the connection attempt, it helps ensure the lowest possible delay when trying to establish a connection to one of a set of servers.

  • The LDIF reader has been updated to provide better control over how to handle lines with unexpected trailing spaces, and also to make it possible to read data from a file specified using a relative path.

  • The searchrate, modrate, and search-and-modrate tools have been updated to make it possible to periodically close and re-establish connections to the server after a specified number of operations.

  • Fixed a corner case bug that could affect applications attempting to use multiple resource files with the same paths. For example, if an application tried to use a properties file named "ldap.properties" or "util.properties", there could be a conflict between the version of that file used by the application and the one provided by the UnboundID LDAP SDK for Java. The properties files used by the LDAP SDK have been renamed to avoid the possibility of conflicting with those which may be used by other applications.
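
As an example of the new trust managers in action, something like the following should work for combining them when establishing an SSL-based connection. The class names are from the com.unboundid.util.ssl package, but treat the exact constructor arguments shown here as assumptions:

    import javax.net.ssl.SSLSocketFactory;

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.util.ssl.AggregateTrustManager;
    import com.unboundid.util.ssl.HostNameTrustManager;
    import com.unboundid.util.ssl.SSLUtil;
    import com.unboundid.util.ssl.ValidityDateTrustManager;

    public class TrustManagerExample
    {
      public static void main(final String[] args)
             throws Exception
      {
        // Require that the certificate be within its validity window AND
        // that its CN or subjectAltName match the expected server hostname.
        final AggregateTrustManager trustManager =
             new AggregateTrustManager(true,
                  new ValidityDateTrustManager(),
                  new HostNameTrustManager(false, "ldap.example.com"));

        final SSLUtil sslUtil = new SSLUtil(trustManager);
        final SSLSocketFactory socketFactory =
             sslUtil.createSSLSocketFactory();

        final LDAPConnection connection =
             new LDAPConnection(socketFactory, "ldap.example.com", 636);
        System.out.println(connection.getRootDSE().getVendorName());
        connection.close();
      }
    }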

Thursday, December 1, 2011

UnboundID LDAP SDK for Java 2.3.0

We have just released the UnboundID LDAP SDK for Java 2.3.0. It's been about six months since the last release, and there are several new features and bug fixes. It is available for download now from the UnboundID website and the SourceForge project page, and it's also available in the Maven Central Repository.

The release notes contain a pretty comprehensive set of changes since the 2.2.0 release, but some of the most significant changes are as follows:

  • It is now possible to use DNS SRV records (as described in RFC 2782) to automatically discover available LDAP servers in the environment. The implementation will respect defined priorities and weights, and can be used for individual connections or connection pools.

  • Experimental support has been added for the password policy control (as defined in draft-behera-ldap-password-policy) and the no-operation control (as defined in draft-zeilenga-ldap-noop). Even though these drafts are not necessarily finalized, some servers (including the UnboundID Directory Server) have implemented support for them, so it is useful to be able to access them through the LDAP SDK.

  • The schema caching mechanism (which makes it possible for client-side matching determinations to use the server schema) has been made much more efficient, so that multiple connections to the same server with equivalent perceptions of the schema will reference the same object rather than holding separate equivalent objects.

  • Updated the LDAP SDK so that operations invoked via the asynchronous API will still respect client-side timeouts.

  • A number of schema-related changes have been made to the in-memory directory server. You can now update the schema dynamically through LDAP modify operations. Supported syntaxes and matching rules are now advertised (at least when using the default standard schema). You can configure the server to allow attribute values which violate the associated attribute syntax, or to allow entries with multiple structural object classes (or no structural class at all).

  • The in-memory directory server has been updated with support for equality indexes to help speed up certain kinds of search operations (particularly when dealing with more than a handful of entries).

  • The in-memory directory server has been updated to always use the "dn:" form in authorization identity response controls. Previously, it could use the "u:" form in responses to SASL PLAIN binds that used the "u:" form in the request. It will also now use the correct value of "" instead of "dn:" to indicate the anonymous authorization identity.

  • It is now possible to customize the values that will be displayed for the vendorName and vendorVersion attributes in the root DSE. This can help the server more effectively fool applications which are coded to only work with certain directories.

  • The LDAP SDK persistence framework has been updated so that it supports attributes with options (e.g., "userCertificate;binary"). It is now also possible to specify superior object classes that should be included in entries that are created.

  • The connection pool implementation has been updated to provide better closed connection and unsolicited response detection for connections operating in synchronous mode.

  • The 2.2.0 release added support for using a newly-created connection to retry operations that failed in a manner that indicated the connection may no longer be valid. In the 2.3.0 release, it is now possible to configure that capability based on the type of operation being processed, whereas in the previous version all operation types were handled in an identical manner.

  • The LDIFReader has new convenience methods that can be used to read the contents of an LDIF file and retrieve the contents as a list of entries. This can be convenient when working with small LDIF files, especially for testing purposes (see the sketch after this list).

  • The LDAP SDK now supports parsing LDAP URLs with an "ldapi" scheme. The LDAP SDK does not provide support for LDAPI (LDAP over UNIX domain sockets) in the out-of-the-box configuration, but it can now parse URLs using an "ldapi" scheme.

  • Command-line tools have been updated so that they can specify a tool version. If this is used, then the LDAP SDK can automatically add a "--version" argument to such tools which will cause the version string to be printed to the terminal.

  • Some changes were made to help the LDAP SDK be more fully functional on IBM Java VMs. This includes necessary changes to support GSSAPI on IBM VMs, and a workaround for an apparent bug that could result in exceptions from concurrent calls to SocketFactory.createSocket methods.
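
As an example of the new LDIFReader convenience methods mentioned above, reading a small test file becomes a one-liner. The file name is a placeholder, and the exact readEntries signature should be treated as an assumption:

    import java.util.List;

    import com.unboundid.ldap.sdk.Entry;
    import com.unboundid.ldif.LDIFReader;

    public class ReadEntriesExample
    {
      public static void main(final String[] args)
             throws Exception
      {
        // Read the whole file into memory in one call; convenient for small
        // files, especially in test cases.
        final List<Entry> entries = LDIFReader.readEntries("test-data.ldif");
        for (final Entry entry : entries)
        {
          System.out.println(entry.getDN());
        }
      }
    }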

Tuesday, June 14, 2011

The Problems with Twitter's Automatic URL Shortening

At the beginning of 2010, I decided to start writing up my thoughts on all of the first-run movies that I see in the theater. It's debatable whether those reviews are any good, but I know that at least some people read them. All of my reviews from the last year and a half are available at http://www.viewity.com/.

Last Thursday, I saw (but did not particularly enjoy) J.J. Abrams' new movie Super 8, and last night I finally got around to writing my review of it, which I posted at http://www.viewity.com/reviews/super-8.html. I use Squarespace to host the reviews, and one of the services it provides is the ability to define a shorter URL that can be used to reference the content. I took advantage of this and created the path "/Super8" instead of "/reviews/super-8.html". Squarespace also offers support for using multiple domains with the same account, and I have "vwty.us" in addition to "viewity.com". What this ultimately means is that going to "http://vwty.us/Super8" will take you to "http://www.viewity.com/reviews/super-8.html".

Whenever I post a new review, one of the ways I let people know about it is by Twitter. The whole reason that I offer the shorter version of the URL is that Twitter limits posts to a maximum of 140 characters, and at 21 characters, the short version of the URL is less than half the size of the 43-character long form. This gives me more space to say something about the movie in addition to merely providing the link, and I try to give at least a hint about whether I liked it. For Super 8, the tweet that I composed was:

Super 8 is super underwhelming. http://vwty.us/Super8

However, what actually got tweeted was:

Super 8 is super underwhelming. http://t.co/TZ43SmY

I will grant you that what Twitter actually made available on my behalf is a whopping two characters shorter. However, it is also much worse than what I had originally written, for many reasons.

First, it's completely unnecessary. As I mentioned before, Twitter places restrictions on the length of your tweets, but I wasn't anywhere near that. What I originally wrote was 89 characters, which means that I could have written up to 51 more characters before running out of space. I could have even used the original 43-character URL if I had wanted to and still had plenty of space left.

Second, Twitter's change dramatically obscures the URL. From the URL that I provided, you can tell that it goes to the vwty.us domain (which is a brand that I control and want to be associated with), and the "/Super8" path gives you a pretty good idea what it might be about. On the other hand, with what Twitter actually provided, you can see that it goes to the "t.co" domain (which is known to be a redirect farm so you have no idea where the content actually resides), and the path "/TZ43SmY" tells you nothing about the content. The original URL is very useful. The shortened version is not.

Another significant problem is that the new URL shortener can have a dramatic impact on the availability of your content. Twitter has such a bad reputation in this area that their "fail whale" page is a well known Internet meme. Because a click on the shortened URL must go through Twitter's servers before sending you to the ultimate destination, if Twitter is having a problem then it can make your content unavailable. As if by fate, when I clicked on the t.co link earlier this morning, I got exactly that failure page telling me that Twitter was over capacity. Nice. Even if it had worked, it still requires an extra HTTP request and more data over the wire, and an unnecessary delay in getting to the actual content.

The requirement to go through Twitter's service creates even more ways that the content could become unavailable. It's likely that tweets will outlive Twitter itself. They're being archived in the Library of Congress (in addition to a number of other sites), and although future generations probably don't care how I feel about a movie, there could be long-term value in tweets and the links contained in them. If Twitter goes out of business or is otherwise shut down, then their links won't work anymore even if the content they referenced is still available. Also, it's worth pointing out that the ".co" TLD is controlled by the government of Colombia, and that government can shut down such URLs at any time. The government of Libya has done this for ".ly" domains, so it's certainly not beyond the realm of possibility.

Twitter's reason for providing this service is that it can "better protect users from malicious sites that engage in spreading malware, phishing attacks, and other harmful activity". While this sounds noble, it is also completely ineffective against everyone except the most extreme idiots. They've already stated that they won't shorten URLs that were already shortened using other services like bit.ly, so there's nothing to prevent people doing suspicious things from using one of those for their posts. Further, there's nothing to prevent me from serving up different content when I can see that the request is coming from Twitter's malware detection service, so I could still serve up bad stuff to people following the links. On the other hand, the fact that they are trying to verify that content is safe introduces a very real possibility of false positives. My site could have completely legitimate and safe content, but if Twitter thinks that it's bad for some reason, then that may significantly inhibit the likelihood that people will go there. Given the unacceptably high percentage of false positives I see from other services like this (e.g., Google Mail's spam detection frequently flags things that aren't spam), this is far from an impossibility.

Finally, in the ultimate act of inanity, Twitter's URL shortener can actually produce URLs that are longer than the original URL. For example, when I entered a URL of "http://t.co", Twitter "shortened" it to be "http://t.co/IzZPmi2".

I realize that Twitter will show an expanded version of the URL in its web interface, but that doesn't work for alternate clients. For example, when I use Seesmic on my Android phone, I get the t.co version. And even if I'm using a client that automatically expands that URL, it will only work if the shortening service is available.

Great job, Twitter. This "feature" that I can't disable has made my links less available, less recognizable, and more likely to be flagged as malicious content. I don't need any more hurdles to have to get by for people to read the useless drivel that I write.

Tuesday, May 31, 2011

Comparing Java LDAP SDK Performance

At UnboundID, we take performance seriously and are always trying to improve. This applies just as much for our client-side SDK as for our server products, since a server isn't very useful without client applications to take advantage of it. There are a number of tools that can be used to measure various aspects of directory server performance, but it's not as simple to measure the performance of client-side libraries.

To help address this problem, I've written a simple tool that can be used to perform a number of different kinds of LDAP operations using various Java-based LDAP SDKs. It's not particularly elaborate and there's only a command-line interface, but it provides a range of options, including which SDK to use, the type of operation to test, the number of concurrent threads to use, the length of time to run the test (and to warm-up before beginning to collect data), the type of entries to use, the number of entries to return per search, the result code that should be returned for each operation, and how frequently to close and re-establish connections.

It's obviously the case that, as the primary developer for the UnboundID LDAP SDK for Java, I am more than a little biased about which SDK I prefer. However, to the best of my knowledge the way that the tool performs the tests is as fair as possible and uses the most efficient mechanism offered by each of the libraries. If anyone believes that there is a more efficient way to use any of the SDKs, then I'd be happy to hear about it and update the results accordingly.

At present, the tool provides at least some level of support for the following SDKs:

  • Apache LDAP API, version 1.0.0-M3. Although I have written code in the hope of testing this SDK, it does not appear to be fully functional at the present time. For example, when trying to perform searches with multiple threads using a separate connection per thread (which is the only way I have used it to this point), it looks like only a single thread is actually able to perform searches and all the others throw timeout exceptions. If anyone knows how to work around that, I'd be happy to hear about it. Until this problem is resolved, this tool isn't very useful for testing its performance.

  • JNDI, as is included in Java SE. For my testing, I used Java SE 1.6.0_25. JNDI is a very abstract API that has the ability to communicate using a number of protocols, and as such was not designed specifically for LDAP. Unfortunately, this means that it's not ideal for LDAP in a lot of ways. For example, it doesn't appear that JNDI provides any way to get the actual numeric LDAP result code returned by the server in response to various operations, and it also looks like it does not support bind (for the purpose of authenticating clients) as a distinct type of operation but only in the course of establishing a connection or re-authenticating before performing some other kind of operation. As such, the performance testing tool does not support bind operations, and it does not support testing with operations that return non-successful responses because the result code cannot be verified.

  • Netscape Directory SDK for Java, version 4.17 (compiled from source, as there does not appear to be a download for a pre-built version of the library). This SDK is fully supported by the performance testing tool.

  • Novell LDAP Classes for Java, also known as JLDAP, version 2009.10.07-1. This SDK is fully supported by the performance testing tool.

  • OpenDJ LDAP SDK, version 3.0.0 (snapshot build from May 28, 2011). This appears to be a fork of the OpenDS LDAP SDK that has had the package structure changed, and may have some number of additional changes as well. However, I was not able to successfully use this SDK to run any tests because the code that I used (despite identical code working for the OpenDS SDK, with the exception of changing the package names in import statements) threw an exception when trying to run, indicating that it was attempting to subclass a class that had previously been declared final. It also appeared to be missing an org.forgerock.i18n.LocalizedIllegalArgumentException class, although I worked around that problem by writing my own version of that class.

  • OpenDS LDAP SDK for Java, 0.9.0 build from May 26, 2011. This SDK is fully supported by the performance testing tool. In addition, because the API provides options for both synchronous and asynchronous connections, the "--useSynchronousMode" option is supported to request using the synchronous version of the API, which does not support the use of abandon or multiple concurrent operations on the same connection; omitting this argument will use a version that does support those capabilities.

  • UnboundID LDAP SDK for Java, version 2.2.0. This SDK is fully supported, including the use of the "--useSynchronousMode" option; a sketch of enabling synchronous mode appears immediately after this list.
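
For reference, enabling synchronous mode in the UnboundID LDAP SDK is a single connection option. This is a minimal sketch; the host and port are placeholders:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPConnectionOptions;
    import com.unboundid.ldap.sdk.LDAPException;

    public final class SynchronousModeExample
    {
      public static LDAPConnection connect(final String host, final int port)
             throws LDAPException
      {
        // Synchronous mode eliminates the background reader thread that is
        // otherwise associated with each connection, at the cost of abandon
        // support and concurrent operations on the same connection.
        LDAPConnectionOptions options = new LDAPConnectionOptions();
        options.setUseSynchronousMode(true);
        return new LDAPConnection(options, host, port);
      }
    }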

These tests obviously require communication with an LDAP directory server. Because the intention is not to measure the performance of the directory server but rather the SDK being used to communicate with that server, it is ideal to use something that is as fast as possible (so that the server is not a bottleneck) and that can be manipulated to give an arbitrary response for any operation. For this purpose, a custom server was created using the LDAP Listener API provided as part of the UnboundID LDAP SDK for Java. It is important to note, however, that even though this API is part of the UnboundID LDAP SDK, it can be used with any kind of client and all interaction with it was over the LDAP protocol using a socket connected over the test system's loopback interface. The UnboundID LDAP SDK did not have any advantage over any other SDK when interacting with this server.
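
For the curious, a server like this can be stood up in just a few lines using the listener API. This is a minimal sketch and not the actual test server; it assumes the CannedResponseRequestHandler class included with the LDAP SDK, which returns a predefined success result for every request:

    import com.unboundid.ldap.listener.CannedResponseRequestHandler;
    import com.unboundid.ldap.listener.LDAPListener;
    import com.unboundid.ldap.listener.LDAPListenerConfig;

    public final class CannedResponseServer
    {
      public static void main(final String[] args) throws Exception
      {
        // Accept LDAP connections on port 10389 and return a success result
        // for every request that is received.
        LDAPListenerConfig config = new LDAPListenerConfig(10389,
             new CannedResponseRequestHandler());
        LDAPListener listener = new LDAPListener(config);
        listener.startListening();
      }
    }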

All of the tests that I ran used Java SE 1.6.0_25 (64-bit version) on a system with a 2.67GHz 8-core Intel Core i7 CPU with 12GB of memory, running a fully-patched version of Ubuntu Linux version 11.04. A detailed description of each of the tests that I ran is provided below, along with the results that I obtained. Each test was run with the JNDI, Netscape, Novell, OpenDS, and UnboundID SDKs, using 1, 2, 4, 8, 16, 32, 64, and 128 client threads. For the OpenDS and UnboundID SDKs, tests were run using both the asynchronous and synchronous modes of operation.

Add Operation Performance

When processing add operations, performance may vary based on the size of the entry being added to the server. As such, I ran two different add performance tests: one using a "normal-sized" entry (an inetOrgPerson entry with 15 attributes) and one using a "large" entry (a groupOfUniqueNames entry with 1000 uniqueMember values).
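
As an illustration of what the normal-sized case looks like in code, here is a hedged sketch using the UnboundID LDAP SDK. The DN and attribute values are made up, and the entry is a smaller stand-in for the 15-attribute inetOrgPerson entries used in the tests:

    import com.unboundid.ldap.sdk.AddRequest;
    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPResult;

    public final class AddExample
    {
      public static void main(final String[] args) throws Exception
      {
        LDAPConnection connection = new LDAPConnection("localhost", 389);

        // Construct an inetOrgPerson entry from LDIF lines and add it.
        AddRequest addRequest = new AddRequest(
             "dn: uid=test.user,ou=People,dc=example,dc=com",
             "objectClass: top",
             "objectClass: person",
             "objectClass: organizationalPerson",
             "objectClass: inetOrgPerson",
             "uid: test.user",
             "givenName: Test",
             "sn: User",
             "cn: Test User");
        LDAPResult result = connection.add(addRequest);

        connection.close();
      }
    }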

The results I measured when running these tests were:

Add Throughput for Normal-Sized Entries

SDK                             Highest Normal-Sized Entry Add Throughput
JNDI                            89,027.954 adds/sec
Netscape                        98,916.582 adds/sec
Novell                          86,766.964 adds/sec
OpenDS (asynchronous mode)      88,525.069 adds/sec
OpenDS (synchronous mode)       88,586.290 adds/sec
UnboundID (asynchronous mode)   142,105.659 adds/sec
UnboundID (synchronous mode)    174,665.853 adds/sec

Add Throughput for Large Entries

SDK                             Highest Large Entry Add Throughput
JNDI                            6,472.209 adds/sec
Netscape                        8,723.301 adds/sec
Novell                          7,437.703 adds/sec
OpenDS (asynchronous mode)      9,454.340 adds/sec
OpenDS (synchronous mode)       9,747.643 adds/sec
UnboundID (asynchronous mode)   17,602.504 adds/sec
UnboundID (synchronous mode)    18,545.810 adds/sec

From these tests, it appears that the UnboundID LDAP SDK for Java is significantly faster than any of the other SDKs when processing add operations, and using the UnboundID LDAP SDK in synchronous mode provides a notable performance improvement over the default asynchronous mode. In contrast, the OpenDS LDAP SDK does not appear to exhibit a significant difference in add performance based on whether the asynchronous or synchronous version of the API is selected.

Search Operation Performance

As with adds, search operation performance can vary significantly based on the size of the entries being returned. As such, I ran search tests using the same "normal-sized" and "large" entries as for the add operation testing, and I also tested "tiny" entries in which only a single attribute was returned. Further, because the server can return multiple entries for a single search operation, I ran each test both with searches returning a single entry and with searches returning 100 identical entries. Results from those tests are provided below:
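
For reference, the single-entry, single-attribute case looks roughly like the following with the UnboundID LDAP SDK. This is a hedged sketch; the base DN and filter are made up:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.SearchRequest;
    import com.unboundid.ldap.sdk.SearchResult;
    import com.unboundid.ldap.sdk.SearchScope;

    public final class SearchExample
    {
      public static void main(final String[] args) throws Exception
      {
        LDAPConnection connection = new LDAPConnection("localhost", 389);

        // Request a single attribute ("cn") to mimic the tiny-entry case.
        SearchRequest searchRequest = new SearchRequest(
             "dc=example,dc=com", SearchScope.SUB, "(uid=test.user)", "cn");
        SearchResult searchResult = connection.search(searchRequest);
        System.out.println("Entries returned:  " +
             searchResult.getEntryCount());

        connection.close();
      }
    }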

Search Throughput for 1 Tiny Entry

SDK                             Highest Search Throughput for 1 Tiny Entry
JNDI                            54,272.423 searches/sec
Netscape                        74,601.755 searches/sec
Novell                          69,686.323 searches/sec
OpenDS (asynchronous mode)      73,964.204 searches/sec
OpenDS (synchronous mode)       74,572.779 searches/sec
UnboundID (asynchronous mode)   109,159.192 searches/sec
UnboundID (synchronous mode)    168,315.209 searches/sec

Search Throughput for 1 Normal-Sized Entry

SDK                             Highest Search Throughput for 1 Normal-Sized Entry
JNDI                            46,355.770 searches/sec
Netscape                        49,668.681 searches/sec
Novell                          55,988.055 searches/sec
OpenDS (asynchronous mode)      55,408.763 searches/sec
OpenDS (synchronous mode)       54,923.308 searches/sec
UnboundID (asynchronous mode)   83,846.853 searches/sec
UnboundID (synchronous mode)    115,738.348 searches/sec

Search Throughput for 1 Large Entry

SDK                             Highest Search Throughput for 1 Large Entry
JNDI                            11,045.600 searches/sec
Netscape                        3,849.413 searches/sec
Novell                          748.249 searches/sec
OpenDS (asynchronous mode)      10,449.903 searches/sec
OpenDS (synchronous mode)       10,374.687 searches/sec
UnboundID (asynchronous mode)   20,645.026 searches/sec
UnboundID (synchronous mode)    21,341.607 searches/sec

Search Throughput for 100 Tiny Entries

SDK                             Highest Search Throughput for 100 Tiny Entries
JNDI                            5,749.687 searches/sec
Netscape                        2,768.797 searches/sec
Novell                          2,739.363 searches/sec
OpenDS (asynchronous mode)      8,295.155 searches/sec
OpenDS (synchronous mode)       8,315.379 searches/sec
UnboundID (asynchronous mode)   5,566.711 searches/sec
UnboundID (synchronous mode)    7,265.108 searches/sec

Search Throughput for 100 Normal-Sized Entries

SDK                             Highest Search Throughput for 100 Normal-Sized Entries
JNDI                            1,983.319 searches/sec
Netscape                        875.249 searches/sec
Novell                          1,681.767 searches/sec
OpenDS (asynchronous mode)      1,959.581 searches/sec
OpenDS (synchronous mode)       1,917.131 searches/sec
UnboundID (asynchronous mode)   2,308.414 searches/sec
UnboundID (synchronous mode)    3,278.463 searches/sec

Search Throughput for 100 Large Entries

SDK                             Highest Search Throughput for 100 Large Entries
JNDI                            127.716 searches/sec
Netscape                        39.667 searches/sec
Novell                          6.731 searches/sec
OpenDS (asynchronous mode)      117.633 searches/sec
OpenDS (synchronous mode)       117.233 searches/sec
UnboundID (asynchronous mode)   225.800 searches/sec
UnboundID (synchronous mode)    237.400 searches/sec

In this case, there is significant variation among many of the SDKs based on the size and number of entries being returned. The UnboundID LDAP SDK is significantly faster than the other SDKs in most cases, with a notable further improvement when using synchronous mode. The OpenDS SDK is quite a bit faster than the UnboundID LDAP SDK for searches returning 100 entries with only a single attribute per entry, but not for normal-sized or large entries. On the other hand, both the Netscape and Novell SDKs appear to be extremely slow when dealing with large search result entries, and the Netscape SDK is also much slower than the others for large entry sets. It is also important to note the significant drop in search performance when using JNDI with larger numbers of threads when returning a single tiny or normal-sized entry.

Modify Operation Performance

For a client SDK, modify performance has fewer variables than for either add or search operations. For a directory server, there are a number of factors, including the size of the target entry, the size of the modified attributes, and whether any of the target attributes is indexed, but none of these has an impact on the client. It is certainly the case that a modify request could update a large number of attributes and/or attribute values, but generally clients modify only one or two values at a time. As such, the only modify test run was for a modify operation replacing a single attribute value. Results for this test are:
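
For reference, the single-value replace used in this test looks roughly like the following with the UnboundID LDAP SDK. This is a hedged sketch; the DN and attribute values are made up:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPResult;
    import com.unboundid.ldap.sdk.Modification;
    import com.unboundid.ldap.sdk.ModificationType;

    public final class ModifyExample
    {
      public static void main(final String[] args) throws Exception
      {
        LDAPConnection connection = new LDAPConnection("localhost", 389);

        // Replace a single attribute value, as in the test.
        LDAPResult result = connection.modify(
             "uid=test.user,ou=People,dc=example,dc=com",
             new Modification(ModificationType.REPLACE,
                  "description", "replacement value"));

        connection.close();
      }
    }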

Modify Throughput

SDK                             Highest Modify Throughput
JNDI                            139,593.362 mods/sec
Netscape                        139,362.802 mods/sec
Novell                          122,199.697 mods/sec
OpenDS (asynchronous mode)      109,556.189 mods/sec
OpenDS (synchronous mode)       109,832.363 mods/sec
UnboundID (asynchronous mode)   196,411.917 mods/sec
UnboundID (synchronous mode)    242,248.797 mods/sec

Again, the UnboundID LDAP SDK is significantly faster than the other SDKs, and again, there is a significant advantage to using synchronous mode. The Novell SDK's modify performance seems to drop off significantly for higher numbers of threads.

Summary

Download a complete set of results for all of the tests, formatted as an OpenDocument spreadsheet.

In most cases, the UnboundID LDAP SDK for Java is faster than all other SDKs by a wide margin, and in all cases using the UnboundID LDAP SDK for Java in synchronous mode was faster than using the SDK in the default asynchronous mode. As such, if you are using the UnboundID LDAP SDK for Java and don't need to perform asynchronous operations, it is highly recommended that you enable synchronous mode for your application's connections.

The only case in which the UnboundID LDAP SDK for Java was not the fastest was a search in which a large number of entries were returned with only a single attribute per entry. It was the fastest in both of the other tests involving a large number of entries, and it was also the fastest when returning only one entry with a single attribute. I will investigate the UnboundID LDAP SDK's performance in this area to determine whether it can be improved.

The OpenDS LDAP SDK (which was started after I left OpenDS, and in whose development I have not participated in any way) appears to be the second fastest. It was the only SDK to outperform the UnboundID LDAP SDK in any of the tests, and it was never the slowest in any of them. There does not appear to be any measurable difference in its performance between the synchronous and asynchronous modes. Across all of the tests, the OpenDS LDAP SDK achieved about 56.0% of the overall performance of the UnboundID LDAP SDK in synchronous mode, and 67.8% of its performance in asynchronous mode.

JNDI offers middle-of-the-pack performance in most cases, but its very poor showing for searches returning a single entry with high numbers of threads may be a significant cause for concern, since this is a very common kind of operation.

The Novell SDK performance when dealing with large search result entries is very troublesome, and it is also significantly slower than all other SDKs for modify operations with a high degree of concurrency. The Netscape SDK also appears to have problems with large search result entries, and its search performance for searches returning multiple entries is a problem as well.

Monday
May232011

UnboundID LDAP SDK for Java 2.2.0

UnboundID LDAP SDK for Java 2.2.0 has just been released and is available for download from the UnboundID website or the SourceForge project page, and is also available in the Maven central repository.

The release notes provide a full overview of the changes in this release over the previous 2.1.0 version. There are several bug fixes, but some of the most notable new features include:

  • A new Minimal Edition has been introduced. The Minimal Edition is available under the same licenses as the Standard Edition and provides support for all LDAP operations, but a number of capabilities have been removed (e.g., support for SASL authentication, a number of controls and extended operations, the persistence framework, the listener framework and in-memory directory server, and JNDI and Netscape SDK migration support). The primary goal of the Minimal Edition is to provide a version of the LDAP SDK with a small jar file size, which is desirable for resource-constrained environments like Android applications or other embedded use. The Minimal Edition is available as a separate download, from either the UnboundID website or the SourceForge project.

  • Connection pooling support has been updated to provide the ability to automatically retry operations if the first attempt fails in a way that indicates the connection may no longer be valid. In such cases, a new connection will be established (potentially to a different server, based on the ServerSet in use for the pool) and the operation will be re-attempted on that connection. This can help isolate applications from failures if one of the target directory servers is shut down, crashes, hangs, or begins behaving erratically. A sketch of enabling this feature appears after this list.

  • The in-memory directory server has been updated to add support for maintaining referential integrity (e.g., so that if a user's entry is deleted, that user can be automatically removed from any static groups in which the user was a member), to support LDAP transactions as described in RFC 5805, and to add support for inserting an arbitrary delay before processing operations (which can be useful in simulating environments with slower response times or higher network latencies). There have also been a couple of fixes for bugs that could cause the in-memory directory server to behave incorrectly.

  • The LDAP SDK persistence framework has been updated to provide better support for searches. Previously, it was difficult to search for entries using anything but equality searches. The generate-source-from-schema tool has been updated so that it will now generate additional methods that can make it easier to perform other kinds of searches, including presence, substring (starts with, ends with, and contains), greater-or-equal, less-or-equal, and approximately-equal-to.

  • New methods have been added which make it significantly easier to interact with response controls. Each response control class now has one or more static get methods that can be used to extract and decode a response control of that type from a given response object (see the sketch after this list).

  • Support for GSSAPI authentication has been significantly improved to add support for a number of new options, including the ability to indicate whether to use (or even require) a ticket cache, to specify an alternate location for the ticket cache file, and to request that the TGT be renewed. Changes have also been introduced to make it easier to access GSSAPI debugging information.

  • A new option has been added that makes it possible to automatically send an abandon request to the directory server if a client-side timeout is encountered while waiting for a response to an operation. Previously, the LDAP SDK would throw an exception but did not have any option to attempt to abandon the operation in the directory server.

  • The LDAP SDK can now use schema information (if available) in the process of normalizing and comparing DNs and RDNs. This can provide more accurate matching for DNs that use attributes for which something other than case-insensitive string matching should be used.

  • The LDIF reader has been updated to provide the ability to read data from multiple files. This can be useful for cases in which the complete set of data you want to process is broken up into multiple files (e.g., representing different portions of the DIT).
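
To illustrate a couple of these additions in code, here is a minimal, hedged sketch. The host name, credentials, and entry DN are placeholders; it relies on the connection pool's setRetryFailedOperationsDueToInvalidConnections method and the static PostReadResponseControl.get method described above:

    import com.unboundid.ldap.sdk.LDAPConnectionPool;
    import com.unboundid.ldap.sdk.LDAPResult;
    import com.unboundid.ldap.sdk.ModifyRequest;
    import com.unboundid.ldap.sdk.SimpleBindRequest;
    import com.unboundid.ldap.sdk.SingleServerSet;
    import com.unboundid.ldap.sdk.controls.PostReadRequestControl;
    import com.unboundid.ldap.sdk.controls.PostReadResponseControl;

    public final class PoolRetryAndControlExample
    {
      public static void main(final String[] args) throws Exception
      {
        // Create a ten-connection pool and enable automatic retry for
        // operations that fail in a way that suggests the connection is no
        // longer valid.
        SingleServerSet serverSet = new SingleServerSet("ds.example.com", 389);
        LDAPConnectionPool pool = new LDAPConnectionPool(serverSet,
             new SimpleBindRequest("cn=Directory Manager", "password"), 10);
        pool.setRetryFailedOperationsDueToInvalidConnections(true);

        // Attach a post-read request control to a modify operation, then use
        // the static get method to extract and decode the response control.
        ModifyRequest modifyRequest = new ModifyRequest(
             "dn: uid=test.user,ou=People,dc=example,dc=com",
             "changetype: modify",
             "replace: description",
             "description: updated");
        modifyRequest.addControl(new PostReadRequestControl("description"));
        LDAPResult modifyResult = pool.modify(modifyRequest);

        PostReadResponseControl postReadControl =
             PostReadResponseControl.get(modifyResult);
        if (postReadControl != null)
        {
          System.out.println(postReadControl.getEntry());
        }

        pool.close();
      }
    }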