ForgeRock Directory Services 7.2 has been released

ForgeRock Directory Services 7.2 is the last release of a ForgeRock product that I’ve managed. It was finished when I left the company and was released to the public a few days later. Before I dive into the changes available in this release, I’d like to thank the amazing team that produced this version: the whole Engineering team led by Matt Swift, the Quality Engineering team led by Carole Forel, the best and only technical writer Mark Craig, and our sustaining engineer Chris Ridd, who contributed some important fixes for existing customers. You all rock, and I’ve really appreciated working with you all these years.

So what’s new and exciting in DS 7.2?

First, this version introduces a new type of index: the Big Index. This type of index optimizes search queries that are expected to return a large number of results from an even larger number of entries. For example, suppose an application searches for all users in the USA who live in a specific state. In a population of hundreds of millions of users, millions may live in one particular state (let’s say Ohio). With previous versions, searching for all users in Ohio would be unindexed, and the search, if allowed, would scan the whole directory data to identify the entries in Ohio. With 7.2, the state attribute can be indexed as a Big Index, and the same search query is considered indexed, only going through the reduced set of users that have Ohio as the value of the state attribute.

Big Indexes can have a lower impact on write performance than regular indexes, but they tend to have a larger on-disk footprint. As usual, choosing a Big Index is a matter of trade-off between read and write performance, but also disk space usage, which may itself have some impact on performance. It is recommended to test and run benchmarks in development or pre-production environments before using them in production.
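
To give an idea of what this looks like in practice, here is a minimal sketch of creating a Big Index on a state attribute and rebuilding it. The attribute name (st), backend name, index type name (big-equality) and connection parameters are assumptions, so check the DS 7.2 configuration reference for the exact values:

$ dsconfig create-backend-index \
  -h localhost -p 4444 -D uid=admin -w password -X -n \
  --backend-name userRoot \
  --index-name st \
  --set index-type:big-equality

$ rebuild-index -h localhost -p 4444 -D uid=admin -w password -X \
  --baseDN dc=example,dc=com --index st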

The second significant new feature in 7.2 is support for the HAProxy protocol for LDAP and LDAPS. When ForgeRock Directory Services is deployed behind a software load-balancer such as HAProxy, NGINX or a Kubernetes Ingress, DS cannot know the IP address of the client application (the only IP address it sees is the load-balancer’s), and therefore it cannot enforce specific access controls or limits per application. By supporting the HAProxy protocol, DS can decode a specific header sent by the load-balancer and retrieve information about the client application, such as its IP address, but also some TLS-related information if the connection between the client and the load-balancer is secured by TLS. DS can then use this information in access controls, logging, limits… You can find more details about DS support of the proxy protocol in the DS documentation.
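
As an illustration of what this enables, here is a hypothetical ACI that restricts an application account to connections coming from its own subnet; once DS decodes the proxy protocol header, the ip bind rule is evaluated against the real client address instead of the load-balancer’s (the DN and subnet are made up for this example):

aci: (targetattr="*")
 (version 3.0; acl "CRM application allowed only from its subnet";
 allow (read,search,compare)
 userdn="ldap:///uid=crm-app,ou=apps,dc=example,dc=com" and
 ip="192.168.10.*";)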

In DS 7.2, we have added a new option for securing and hashing passwords: Argon2. When enabled (which is the default), this allows importing users with Argon2-hashed passwords and letting them authenticate immediately. Argon2 may also be selected as the default scheme for hashing new passwords, by associating it with a password policy (such as the default password policy). The Argon2 password scheme has several parameters that control the cost of the hash: version, number of iterations, amount of memory to use, and parallelism (i.e. the number of threads used). While Argon2 is probably the best algorithm today for securing passwords, it can have a very big impact on the server’s performance, depending on the Argon2 parameters selected. Remember that DS encrypts entries on disk by default, so the risk of exposing hashed passwords at rest is extremely low (if not null).
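
For example, making Argon2 the default scheme for newly hashed passwords could look like the following dsconfig command; the policy name, scheme name and connection parameters are assumptions to verify against your deployment and the DS 7.2 documentation:

$ dsconfig set-password-policy-prop \
  -h localhost -p 4444 -D uid=admin -w password -X -n \
  --policy-name "Default Password Policy" \
  --set default-password-storage-scheme:Argon2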

Also new is the ability to search attributes with a DistinguishedName syntax using pattern matching. DS 7.2 introduces a new matching rule named distinguishedNamePatternMatch (defined with the OID 1.3.6.1.4.1.36733.2.1.4.13). It can be used, for example, to search for users with a specific manager with the filter “(manager:1.3.6.1.4.1.36733.2.1.4.13:=uid=trigden,**)” or the more human-readable form “(manager:distinguishedNamePatternMatch:=uid=trigden,**)”, or to search for users whose manager is part of the Admins organisational unit with the filter “(manager:1.3.6.1.4.1.36733.2.1.4.13:=*,ou=Admins,dc=example,dc=com)”.
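
For instance, a complete search command using this matching rule could look like the following sketch (hostname, port and bind credentials are illustrative):

$ ldapsearch -h localhost -p 1389 -D uid=admin -w password \
  -b ou=people,dc=example,dc=com \
  "(manager:distinguishedNamePatternMatch:=uid=trigden,**)" cn mail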

ForgeRock Directory Services 7.2 also includes several minor improvements.

As with every release, there have been several performance optimizations and improvements, and many minor issues have been corrected.

You can find the full details of the changes in the Release Notes.

I hope you will enjoy this latest release of ForgeRock Directory Services. If not, don’t reach out to me, I’m no longer in charge. 😀

Directory Services – Docker, Kubernetes: Friends or Foes?

Two weeks ago, at the ForgeRock Identity Live conference, I gave a talk about ForgeRock Directory Services (DS) in the Docker/Kubernetes (K8S) world, trying to answer the question of whether DS and Docker/K8S are friends or foes.

Before I dive into the question, let me say that it’s obvious that our whole industry is moving to the Cloud, and that Docker/Kubernetes is becoming the standard way to deploy software in the Cloud, in any Cloud. Therefore, whether DS and K8S are ultimately friends or foes is not the right question. I believe it is unavoidable that in the near future we will deploy and fully support Directory Services in K8S. But is it a good idea to do it today? Let’s examine why we are questioning this today, what the benefits of using Kubernetes to deploy software are, what the constraints of deploying the current version of Directory Services (6.5) in Kubernetes are, and what ForgeRock is working on to improve DS in K8S. Finally, I will highlight why Directory Services is a good solution to persist data, whether on premises or in the Cloud.

Why the discussion about DS and K8S?

The main reason we are having this discussion is the nature of Directory Services. DS is not the usual stateless web application. Directory Services is both a stateful application and a distributed one. These are the two main aspects that require special care when trying to deploy it in containers. First, Directory Services is a stateful application because it is the place where one can store the state for all those stateless web applications. In our platform, we use DS to store ForgeRock Access Management data, whether it’s runtime configuration data, tokens or user identities. Second, Directory Services is a distributed application because instances need to talk to each other so that the data is replicated and consistent. Because databases and distributed applications require stronger orchestration and coordination between the elements of the system, they are implemented as Stateful Sets in the Kubernetes world and make use of Persistent Volumes (PV). Therefore, our Cloud Deployment Model of ForgeRock Directory Services is also implemented this way.

It’s worth noting that Persistent Volume is a Kubernetes API, and there are several types of volumes and many different provider implementations. Some of the PV types are very recent and still in beta. So, when using Kubernetes for applications that persist data, you should have a good understanding of the characteristics and the performance of the Persistent Volume choices available in your environment.

Benefits of Containers and Kubernetes

Developers make great use of containers because they let them focus on what they have to build and test. Instead of spending hours figuring out how to install and configure a database, and building a monitoring platform to validate their work, they can pull one or more Docker images that automate these tasks.

When going into production, automation is a key aspect. Kubernetes and its family of tools allow administrators to describe their target architectures and automate deployment, monitoring and incident response. Typically, in a Kubernetes cluster, if the administrator requires at least 3 instances of an application, Kubernetes will react to the disappearance of an instance and restart a new one immediately. Another key benefit of Kubernetes is auto-scalability. A Kubernetes deployment can react to monitoring alerts or external signals by adding or removing instances of an application in order to support a greater or smaller workload. This optimises the cost of running the solution, balancing the capacity to absorb peak loads with the cost of running at normal or low usage levels.

Directory Services 6.5 constraints in K8S

But auto-scaling is not suitable for all applications, and Directory Services, like most databases, does not scale automatically by adding more running instances. Because databases have state and data, and expect exclusive access to their files, adding a new replica is a costly operation: the data needs to be duplicated in order to let another instance use it. Also, adding a Directory Services instance only helps to scale read operations. A write operation on any server needs to be replicated to all other servers, so all servers end up with the same write throughput and the same amount of disk I/O. In the world of databases, the only way to scale write operations is to distribute (shard) the data across multiple servers. Such a capability is not yet available in Directory Services, but it’s planned for future releases. (Note that Directory Proxy Services 6.5 already has support for sharding, but with some constraints. And the proxy is not yet part of the Cloud Deployment Model.)

Another constraint of Directory Services 6.5 is how replication works. The DS replication feature was designed years ago, when customers would deploy servers and not touch them unless they were broken. Servers had stable hostnames or IP addresses and knew all of their peers. In the container world, the address of an instance is only known after the instance is started, and sometimes you want to start several instances at the same time. The current ForgeRock Cloud Deployment Model and the Directory Services Docker images that we propose work around this design limitation of replication management by pre-configuring replication for a fixed (and small) maximum number of replicas. It’s not possible to dynamically add another replica after that. Also, the “dsreplication” utility cannot be used in Kubernetes. Luckily, monitoring replication, and more importantly its latency, is possible with Prometheus, which is the default monitoring technology in Kubernetes.

Coming Improvements in Directory Services

For the past year, we’ve been working hard on redesigning how we manage and bootstrap replication between Directory Services instances. Our main challenge with that work has been to do it in a way that allows us to continue to replicate with previous versions. Interoperability and compatibility of replication between different versions of Directory Services has been, and will remain, a key value of the product, allowing customers to roll out new versions with zero downtime of the service. We’re moving towards using full CA-based certificates and mutual TLS authentication for establishing trust between replicas. Configuring a new replica will no longer require updating all servers in the topology, and replicas that are uninstalled or stopped for some time will be automatically removed from the topology (and so will their associated change logs and metadata). When starting a new replica, it will only need to know of one other running replica (or be told that it is the first one). These changes will make automating the deployment of new replicas much simpler and remove the limit on the number of replicas. We are also improving the way we do backup and restore of a database backend or the whole server, allowing direct use of cloud buckets such as S3 or GCS. All of these things are planned for the next major release, due in the first half of 2020. Most of these features will be used by our own ForgeRock Identity Platform as a Service offering, which will go through Early Access and Beta stages later this year.

Once we have the ability to fully automate the deployment and upgrade of a cluster of Directory Services instances, in one or more data centres, we will start working on horizontal scalability for Directory Services and provide a way to scale the number of servers as the stored data grows, allowing a consistent level of write throughput. All of this will be fully automated, to be deployed in the Cloud using Kubernetes.

Benefits of using Directory Services as a data store

People often ask me why they should use ForgeRock Directory Services rather than a “real” database. First of all, Directory Services is a database. It’s a specialised database, built on a standard data model and a standard access protocol: the Lightweight Directory Access Protocol, aka LDAP. Several people have pointed out in the past that LDAP might even have been the first successful NoSQL database! 🙂 Furthermore, Directory Services also exposes all of the data through a REST/JSON API, while still providing the same security and fine-grained access control mechanisms as through LDAP. But the main value of Directory Services is that you can achieve very high availability of the data (in the five nines), using standard systems (whether bare metal, virtual hosts or containers), even with worldwide geographic distribution. We have many customers that have deployed a single directory service distributed across 3 to 6 data centres around the globe. The LDAP data model has a flexible schema that can be extended and customised without having to rebuild the database or even restart the servers. The data can also be exposed through versioned APIs using our REST API. Finally, the combination of a flexible, extensible schema with fine-grained access controls allows multiple applications to access the data, while controlling precisely which application can read or write which data. This results in a single identity and set of credentials per user, with multiple sets of attributes that can be shared by applications or restricted to a single one: a single central view of the user that is easier and more cost-effective to manage.

Conclusion

Coming back to Kubernetes: because of the constraints of the current Directory Services Cloud Deployment Model with version 6.5, we recommend that you keep your Directory Services deployed in VMs or on bare metal. But with the next release, which underpins the ForgeRock Cloud offering, we will fully support deploying Directory Services on Docker/Kubernetes. We will continue our investment in the product to support auto-scaling (using data sharding) in subsequent releases. Building these solutions is not extremely difficult, but we need time to prove that they are 100% reliable in all conditions, because in the end, the most wanted and appreciated feature of ForgeRock Directory Services is its reliability.

ForgeRock DS and the LDAP Relax Rules Control

In ForgeRock Directory Services 6.5, we’ve added support for the LDAP Relax Rules Control, both in the server and in our client tools. One of my colleagues, involved with customer deployments, asked me why we added the control and what it should be used for.

The LDAP Relax Rules Control is an LDAP extension that allows a directory user agent (a client) to request the directory service to temporarily relax enforcement of various data and service model rules. The internet-draft is explicit about which rules can be relaxed or not. But typically it can be used to allow a client to write specific operational attributes that should be read-only and managed by the server.

Starting with OpenDJ 3.0, we removed the ability to bulk import LDIF data into a server while preserving the existing data (“append mode”). First, performing an import-ldif in append mode was breaking replication: the import needed to be applied to all replicas, while no change was allowed to happen on the new data. The process was cumbersome, especially with multiple data centres. Also, removing this feature allowed us to have a more generic interface and implement multiple backends using different underlying key-value stores.

But we have a few customers that occasionally need to bulk load a large set of users into their directory service. In DS 6.0, we added an option to speed up bulk operations using ldapmodify or ldapdelete: --numConnections. Instead of serialising all updates or adds contained in an LDIF file, the tool runs them in parallel across multiple connections, while also controlling dependencies between changes. With this option, some of our customers have added several million users to their replicated directory services in minutes. By controlling the number of connections, one can also balance the need for speed when bulk loading data against the need to keep bandwidth for the regular client applications.

Doing bulk updates over LDAP is now fast, but some customers used the import process to also carry over attributes that are usually managed by the directory server and are thus read-only, such as createTimestamp and creatorsName.

And this is specifically what the Relax Rules Control is meant to allow.

So, if you need to bulk load a large set of data, or synchronise data over LDAP from another server, and need to preserve some of the operational attributes, you can use the Relax Rules Control as illustrated below. Note that the OID for the control is 1.3.6.1.4.1.4203.666.5.12, but ForgeRock DS tools also recognise the RelaxRules string alias.

$ ldapmodify -p 1389 -D cn=directory\ manager -w secret12 \
  -J RelaxRules:true --numConnections 4 ../50Kusers.ldif
...
ADD operation successful for DN uid=user.10021,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10022,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10001,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10020,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10026,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10025,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10024,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10005,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10033,ou=People,dc=example,dc=com
ADD operation successful for DN uid=user.10029,ou=People,dc=example,dc=com
...

Note that because the Relax Rules Control allows clients to override some of the rules normally enforced by the server, it’s important to control and restrict which clients or users are allowed to make use of it. In ForgeRock DS, you would use ACIs (global or not) to define who has permission to use the control. Out of the box, only Directory Manager can, because it has the bypass access controls privilege. Check the “Use Control or Extended Operation” section of the Administration Guide for details on how to allow a user to use a control.
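
As an illustration, a global ACI that grants a dedicated synchronisation account permission to use the control might look like the following sketch; the account DN is hypothetical, and the OID is the Relax Rules Control OID mentioned above:

aci: (targetcontrol="1.3.6.1.4.1.4203.666.5.12")
 (version 3.0; acl "Allow Relax Rules Control for the sync account";
 allow (read)
 userdn="ldap:///uid=sync,ou=apps,dc=example,dc=com";)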

Explaining index-entry-limit in ForgeRock Directory Services / OpenDJ

A few years ago, I explained the various resource limits in OpenDJ, the open source LDAP and REST directory server. A few months ago, someone read the post and asked on Twitter about the index-entry-limit.


The index-entry-limit is probably the least understood parameter in the OpenDJ directory server, as was the AllIDThreshold in Sun Directory Server (and its siblings: Netscape Directory, Red Hat Directory, Oracle DSEE…). So before I dive into explaining what this parameter is, how it’s used and how it can be tuned, let me start by answering the question: how does index-entry-limit relate to other administrative limits?

Answer: it doesn’t! The index-entry-limit is an internal limit and does not really limit the results returned to clients. It just limits the resources consumed when processing indexes.

A directory server is a very specialized data store based on the LDAP standard, and its primary goal is to search and return user information, such as email addresses, names and phone numbers, very quickly and for a large number of different clients. For that, directory servers were designed to favor reads over writes, and read optimization is achieved through the use of indexes.

In LDAP, a search request (which can be used to read an entry or search for one or more through the whole database) contains a search filter. The filter may be simple or complex, and composed of one or more attribute value assertions.

A simple filter can be “(sn=Smith)”. Complex filters combine operators and different attributes: “(&(objectclass=Person)(|(sn=Smith)(cn=*Smith*)))” – find a person whose surname is Smith or whose common name contains Smith.

When the ForgeRock Directory Server / OpenDJ receives a search request, it processes it in 2 phases. In the first phase, it analyzes the search filter to identify which attributes are indexed, and then uses these indexes to build a list of possible candidates to return. If there are no indexed attributes or the list is too large, the server decides that the list is actually the whole database. Such a search request is tagged as “unindexed”, and the server verifies that the authenticated user has the unindexed-search privilege before continuing. In the second phase, it reads all the candidates from the database and evaluates the full filter to decide whether to return each entry to the client or not (subject to access controls).
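
For example, granting a reporting account the privilege to run unindexed searches can be done by adding the privilege to its entry; the account DN below is illustrative:

$ ldapmodify -h localhost -p 1389 -D cn=directory\ manager -w secret12
dn: uid=reports,ou=apps,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: unindexed-search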

ForgeRock DS / OpenDJ implements attribute indexes as reverse indexes: for a specific attribute, we keep, for each unique value, a list of the entries that contain that value. Because maintaining a large list of entries for each value of all indexed attributes may have a big cost, both in terms of memory usage and disk I/O (remember that when you add an entry to the directory, all of its indexed attributes need to be updated), we introduced a limit to the number of entries that an index record can contain: the index-entry-limit. For example, if the number of entries that contain the objectClass person exceeds the limit, then we mark the key as “full” and we consider that the list of candidates is actually the whole set of directory entries. This saves us from updating and reading a very long record, and allocating lots of data, only to end up iterating through almost all entries. You might ask, why have an index for objectClass then? Well, in a directory server that contains millions of users, there are in fact very few entries that are not persons. These entries will have their objectClass values indexed, and searching for those entries will be very efficient thanks to the index.

The index-entry-limit is a limit on the number of entries that are contained in a single index record, per value of an attribute index. Its default value is 4000, which works for most medium to large scale deployments. So, why is it a configurable parameter, and when should you change it?

Because ForgeRock DS is used in many different environments, with various use cases and a wide range of entry counts (some of our customers have over 100 million entries in a directory service), we know that one size doesn’t fit all. But the default value works for most index usages. Also, the index-entry-limit can be set for each individual index, or for the whole backend (in which case the value applies to all indexes that don’t have a specific value). It is highly recommended that you only change the index-entry-limit of specific indexes, and not the backend default value.

In no case should you increase the index-entry-limit to a value close to the total number of entries in the directory. This would undermine the performance of both searches and updates, and significantly increase the footprint of the data stored on disk.

There are a few known cases where the index-entry-limit value should be changed (and equally, cases where increasing the value will only consume more resources for no performance gain). Also keep in mind that when you change the index-entry-limit, you need to rebuild the indexes for which the limit was changed. So it’s not something that you want to do too often, and definitely not something that you need to adjust constantly.

Groups. When the server starts, it issues an internal search to find all group entries and caches them for better performance. The search is based on the objectClass attribute. If there are more than 4000 groups of one kind (the search covers groupOfNames, groupOfUniqueNames, groupOfEntries, dynamic groups and ds-virtual-static-group), the search will be unindexed and can take a long time to complete. In that case, you should increase the index-entry-limit for the objectClass attribute to a value just above the number of groups, as sketched below.
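
A hedged sketch of that change, assuming a userRoot backend and roughly 4500 group entries (pick a value just above your actual number of groups); the index then needs to be rebuilt:

$ dsconfig set-backend-index-prop \
  -h localhost -p 4444 -D "cn=Directory Manager" -w secret12 -X -n \
  --backend-name userRoot --index-name objectClass \
  --set index-entry-limit:5000

$ rebuild-index -h localhost -p 4444 -D "cn=Directory Manager" -w secret12 -X \
  -b dc=example,dc=com --index objectClass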

Members (or uniqueMembers). If you have more than 4000 static groups, and you know that some users are likely to be members of more than 4000 groups, then you should also increase the index-entry-limit for the member attribute (or uniqueMember) to a value just above the maximum number of groups a user can belong to, especially if you have enabled the Referential Integrity plugin (which removes a user from groups when their entry is deleted).

Another typical use case for increasing the index-entry-limit is when you have millions of entries and an attribute doesn’t have a flat distribution of values. Think about users’ surnames: in a wide population, there are probably more “Smith” or “Lee” than “Washington”. Within 10M users, could there be more than 4000 “Lee”? If that’s possible, and the server receives searches with filters such as “(sn=Lee)”, then you should consider increasing the limit for the sn attribute.

Backendstat is the tool you want to use to verify the state of the indexes and whether some records have reached the index-entry-limit. For some attributes, such as objectClass, it is normal that the limit is reached. For others, such as sn, it’s probably something you want to check regularly.

The backendstat tool requires exclusive access to the database, and thus can only run against a server that is stopped (or a backup).

To list the indexes, use backendstat list-indexes:

$ backendstat list-indexes -b dc=example,dc=com -n userRoot

Index Name Raw DB Name Type Record Count
dn2id /dc=com,dc=example/dn2id DN2ID 10002
id2entry /dc=com,dc=example/id2entry ID2Entry 10002
referral /dc=com,dc=example/referral DN2URI 0
id2childrencount /dc=com,dc=example/id2childrencount ID2ChildrenCount 3
state /dc=com,dc=example/state State 18
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch MatchingRuleIndex 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 MatchingRuleIndex 31232
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match MatchingRuleIndex 10000
aci.presence /dc=com,dc=example/aci.presence MatchingRuleIndex 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch MatchingRuleIndex 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch MatchingRuleIndex 8605
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 19629
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 MatchingRuleIndex 73235
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch MatchingRuleIndex 10000
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch MatchingRuleIndex 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch MatchingRuleIndex 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch MatchingRuleIndex 10002
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch MatchingRuleIndex 10000
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 32217
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch MatchingRuleIndex 10000
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 MatchingRuleIndex 86040
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch MatchingRuleIndex 6
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch MatchingRuleIndex 10000

Total: 23

To check the status of the indexes and see which keys are full (i.e. have exceeded the index-entry-limit), use backendstat show-index-status. Warning: this may take a long time.

$ backendstat show-index-status -b dc=example,dc=com -n userRoot
Index Name Raw DB Name Valid Confidential Record Count Over Entry Limit 95% 90% 85%
uniqueMember.uniqueMemberMatch /dc=com,dc=example/uniqueMember.uniqueMemberMatch true false 0 0 0 0 0
mail.caseIgnoreIA5SubstringsMatch:6 /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6 true false 31232 12 0 0 0
mail.caseIgnoreIA5Match /dc=com,dc=example/mail.caseIgnoreIA5Match true false 10000 0 0 0 0
aci.presence /dc=com,dc=example/aci.presence true false 0 0 0 0 0
member.distinguishedNameMatch /dc=com,dc=example/member.distinguishedNameMatch true false 0 0 0 0 0
givenName.caseIgnoreMatch /dc=com,dc=example/givenName.caseIgnoreMatch true false 8605 0 0 0 0
givenName.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/givenName.caseIgnoreSubstringsMatch:6 true false 19629 0 0 0 0
telephoneNumber.telephoneNumberSubstringsMatch:6 /dc=com,dc=example/telephoneNumber.telephoneNumberSubstringsMatch:6 true false 73235 0 0 0 0
telephoneNumber.telephoneNumberMatch /dc=com,dc=example/telephoneNumber.telephoneNumberMatch true false 10000 0 0 0 0
ds-sync-hist.changeSequenceNumberOrderingMatch /dc=com,dc=example/ds-sync-hist.changeSequenceNumberOrderingMatch true false 0 0 0 0 0
ds-sync-conflict.distinguishedNameMatch /dc=com,dc=example/ds-sync-conflict.distinguishedNameMatch true false 0 0 0 0 0
entryUUID.uuidMatch /dc=com,dc=example/entryUUID.uuidMatch true false 10002 0 0 0 0
sn.caseIgnoreMatch /dc=com,dc=example/sn.caseIgnoreMatch true false 10000 0 0 0 0
sn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/sn.caseIgnoreSubstringsMatch:6 true false 32217 0 0 0 0
cn.caseIgnoreMatch /dc=com,dc=example/cn.caseIgnoreMatch true false 10000 0 0 0 0
cn.caseIgnoreSubstringsMatch:6 /dc=com,dc=example/cn.caseIgnoreSubstringsMatch:6 true false 86040 0 0 0 0
objectClass.objectIdentifierMatch /dc=com,dc=example/objectClass.objectIdentifierMatch true false 6 4 0 0 0
uid.caseIgnoreMatch /dc=com,dc=example/uid.caseIgnoreMatch true false 10000 0 0 0 0
Total: 18
Index: /dc=com,dc=example/mail.caseIgnoreIA5SubstringsMatch:6
Over index-entry-limit keys: [.com] [@examp] [ample.] [com] [e.com] [exampl] [le.com] [m] [mple.c] [om] [ple.co] [xample]
Index: /dc=com,dc=example/objectClass.objectIdentifierMatch
Over index-entry-limit keys: [inetorgperson] [organizationalperson] [person] [top]

I hope this long article helps you better understand and tune your ForgeRock Directory Servers for search performance. Please let me know how it goes.

Better index troubleshooting with ForgeRock DS / OpenDJ

Many years ago, I wrote about troubleshooting indexes and search performance, explaining the magic “debugSearchIndex” operational attribute, which allows an administrator to get information from the server about how indexes were processed for a specific search query.

The returned value provides insight into which indexes were used for a particular search, how they were used, and how the resulting set of candidates was built, allowing an administrator to understand whether indexes are used optimally or need to be tailored better for specific search queries and filters, in combination with access logs and other tools such as backendstat.

In DS 6.5, we’ve made some improvements in the search filter processing and we’ve changed the format of the debugSearchIndex value to provide a better reporting of how indexes are used.

The new format is JSON based, which gives it more structure and allows it to be processed programmatically. Here are a few examples of the new debugSearchIndex attribute values.

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "dc=example,dc=com" "(&(cn=*Den*)(mail=user.19*))" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"intersection":[{"index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"ser.19","candidates":111,"retained":111},{"index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"user.1","candidates":1111,"retained":111},
{"filter":"(cn=*Den*)", "index":"cn.caseIgnoreSubstringsMatch:6",
"range":"[den,deo[","candidates":103,"retained":5}], "candidates":5},"final":5}

Let’s look at the debugSearchIndex value and interpret it:

{
"filter": {
"intersection": [
{
"index": "mail.caseIgnoreIA5SubstringsMatch:6",
"exact": "ser.19",
"candidates": 111,
"retained": 111
},
{
"index": "mail.caseIgnoreIA5SubstringsMatch:6",
"exact": "user.1",
"candidates": 1111,
"retained": 111
},
{
"filter": "(cn=*Den*)",
"index": "cn.caseIgnoreSubstringsMatch:6",
"range": "[den,deo[",
"candidates": 103,
"retained": 5
}
],
"candidates": 5
},
"final": 5
}

The filter had 2 components: (cn=*Den*) and (mail=user.19*). Because the whole filter is an AND, the result set is an intersection of several index lookups. Both are substring filters, but one has a substring of 3 characters and the other a substring of 7 characters. By default, substring indexes are built with substrings of 6 characters, so the filters are treated differently. The server optimises the processing of indexes so that it first tries to use the lookups that are the most effective. In the case above, the filter (mail=user.19*) is preferred: 2 records are read from the index, resulting in a list of 111 candidates. Then, the server uses the remaining filter to narrow the result list. Because the string Den is shorter than the indexed substrings, the server scans a range of keys in the index, starting from the first key matching “den” and stopping before the key that matches “deo”. This results in 103 candidates, but only 5 are retained because they were part of the previous result set. So the result is 5 entries matching these filters.

Note that the [den,deo[ notation is similar to mathematical interval notation, where [ and ] indicate whether a set includes or excludes its boundaries.

Let’s take an example with an OR filter:

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "dc=example,dc=com" "(|(cn=*Denice*)(uid=user.19))" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"union":[{"filter":"(cn=*Denice*)", "index":"cn.caseIgnoreSubstringsMatch:6","exact":"denice","candidates":1}, {"filter":"(uid=user.19)", "index":"uid.caseIgnoreMatch","exact":"user.19","candidates":1}],"candidates":2},"final":2}

As you can see, the result is now a union of 2 exact matches (i.e. reads of index keys), each resulting in 1 candidate.

Finally here’s another example, where the scope is used to attempt to reduce the candidate list:

$ bin/ldapsearch -h localhost -p 1389 -D "cn=directory manager" -b "ou=people,dc=example,dc=com" -s one "(mail=user.1)" debugsearchindex
Password for user 'cn=directory manager': *********

dn: cn=debugsearch
debugsearchindex: {"filter":{"filter":"(mail=user.1)","index":"mail.caseIgnoreIA5SubstringsMatch:6", "exact":"user.1","candidates":1111},"scope":{"type":"one","candidates":"[NOT-INDEXED]","retained":1111},"final":1111}

You can find more information and details about the debugsearchindex attribute in the ForgeRock Directory Services 6.5 Administration Guide.

ForgeRock Directory Services 6.5 is Available

The ForgeRock Identity Platform was released and publicly announced in early December this year.

As you may guess from the announcement, an important part of the new features has to do with DevOps, running in Docker, automated with Kubernetes.

The underlying data store for the ForgeRock Identity Platform is ForgeRock Directory Services, and the new 6.5 release comes with a set of new features and improvements that are detailed in the Release Notes. Here are some highlights:

Ease of use has always been important for us, and DS 6.5 brings it to a new level for customers that are deploying other ForgeRock products. Starting with this version, you can select one or more profiles at installation time. A profile contains the complete configuration for a specific use: base DN, backend, indexes, schema, specific configuration parameters, administrative users, ACIs and privileges. Out of the box, we deliver 3 profiles for ForgeRock Access Management (Identity Store, Configuration Store and Core Token Service Store), 1 profile for ForgeRock Identity Management (Managed Object Store), and 1 profile for Directory Services evaluation, which contains the data and configuration used throughout our documentation and lets you copy and paste the command examples of the guides and replay them against a running server.

To learn more about profiles, get DS 6.5 and run:

setup --help-profiles

To learn about a specific profile, you can run:

setup --help-profile am-cts:6.5.0

With regards to DevOps, containers and automation in the cloud, we’ve continued the efforts that we had started with previous releases.

  • DS 6.5 now supports a method to run post-upgrade tasks on the data, such as rebuilding indexes.
  • The server has 2 new HTTP endpoints to probe its status: /isReady indicates that the server is up and running; /isHealthy indicates whether its current state is optimal, or whether there are temporary limitations, such as a database backend being offline for maintenance or replication lagging too much (with “too much” being fully configurable). See the probe example after this list.
  • The Grafana sample dashboard has been updated.
  • Like all ForgeRock Identity Platform products, DS comes with a Common Audit handler that publishes log messages to stdout, a common practice when working with Docker containers.
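
A quick way to probe these endpoints, assuming the HTTP connection handler listens on port 8080 (depending on your configuration you may need credentials; a healthy server is expected to answer with an HTTP 200 status):

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/isReady
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/isHealthy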

Directory Proxy Server 6.5 now supports “sharding”, i.e. distributing data into multiple discrete replicated directory services. Such deployments make very large amounts of data easier to manage and give better write scalability. In this version, the number of “shards” is fixed, but we are working on making the service scale dynamically as the data grows, in future versions.

Directory Services 6.5 now supports limiting the number of connections that can be opened from a single client application. By IP address, a client may be denied, fully allowed, or restricted in its number of open connections, offering greater protection against misbehaving applications.

The product now also supports the LDAP Relax Rules Control, which allows an administrator to add or modify attributes that are normally read-only. This feature can be used when synchronising data between different LDAP products, so that entries keep the same creation and modification timestamps.

We’ve made the “cn=Changelog” suffix and data available on servers that are only acting as Replication hubs (RS), since they are persisting all the changes to replicate them.

We’ve added a couple of troubleshooting tools with this release. One tool, changelogstat, allows listing and dumping the content of the replication changelog databases. The supportextract tool allows an administrator to capture the state and logs of a Directory Services instance and quickly make the file available to ForgeRock support.

Java 11 is now fully supported, with both the Oracle JVM and OpenJDK builds (from Oracle, Red Hat or Azul Systems).

Finally, as with all releases of Directory Services, we have enhanced the performance and reliability of the server in many areas. Most importantly, we have fully tested that you can upgrade to 6.5 without any service interruption: from any version between 2.6 and 6.0, you can upgrade one instance and let it replicate with the other instances, then upgrade the next one, until all instances are on the latest and greatest version. If you use VMs or containers, you can stop an existing instance and replace it with a new one, or add a new one and then stop an old one… Your choice; both scenarios are supported.

For further details, read the complete Release Notes. I’m looking forward to your feedback on the features and improvements of the Directory Services 6.5 release!

DDoS Attacks leveraging LDAP!

photo by Christiaan Colen

Yesterday, DDoS mitigation provider Corero Network Security disclosed a zero-day distributed denial-of-service (DDoS) attack technique, observed in the wild, that is capable of amplifying malicious traffic by a factor of as much as 55x. Several sites published the story as “Attackers are now abusing exposed LDAP servers to amplify DDoS attacks”.


According to Corero, the attacks exploited the Lightweight Directory Access Protocol (LDAP), but reading the details of the press release, it appears that the attackers were using the Connectionless LDAP service (CLDAP).

In this case, the attacker sends a simple query to a vulnerable reflector supporting the Connectionless LDAP service (CLDAP) and using address spoofing makes it appear to originate from the intended victim. The CLDAP service responds to the spoofed address, sending unwanted network traffic to the attacker’s intended target.

Connectionless LDAP is a very old technical specification, published in 1995 as RFC 1798. In 2003, this specification was obsoleted by RFC 3352 and moved to historical status. One of the main reasons for obsoleting the proposed standard was its insufficient security capabilities.

OpenDJ, the open source LDAP directory service in Java, has never supported CLDAP and thus cannot be used in such an attack. So, if you are a ForgeRock customer, you should not worry about this kind of attack. But if you’re running a legacy product that has CLDAP enabled by default, it is probably time to think about moving to a more recent and up-to-date directory service, such as OpenDJ.


Managing OpenDJ with REST

OpenDJ, the open source LDAP Directory Server, was the first to propose a native HTTP REST / JSON access to the data.

In the next major release, OpenDJ will provide many enhancements to the REST interface, which I will describe in a series of posts. To start with, let’s talk about the new administrative interfaces added to manage the OpenDJ server.

When HTTP access is enabled, OpenDJ creates 2 administrative endpoints by default: /admin/config and /admin/monitor.

/admin/config provides read-write access to the configuration, with the same view and hierarchy of objects as the LDAP access. All of the operations that are possible with the dsconfig command can be done over LDAP, and now REST. As a matter of fact, the /admin/config API is automatically generated from the same XML description files that are used to generate the LDAP view and the dsconfig command line utility. This means that any extension or plugin added to the server is also exposed via REST without additional code.


Querying the /admin/config endpoint, for example to read the userRoot backend configuration, as a user who has the privilege to read the configuration, returns the corresponding configuration as a JSON resource.
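
A minimal sketch of such a request, assuming the HTTP connection handler listens on port 8080 and that user.0 has been granted the config-read privilege:

$ curl -s -u user.0 http://localhost:8080/admin/config/backends/userRoot

A similar query done with a user that doesn’t have the config-read privilege fails as below: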

$ curl -s -u user.2 http://localhost:8080/admin/config/backends/userRoot
Enter host password for user 'user.2': 
{
 "message" : "Insufficient Access Rights: You do not
have sufficient privileges to perform search operations
in the Directory Server configuration",
 "code" : 403,
 "reason" : "Forbidden"
}

/admin/monitor provides a read-only view of all the OpenDJ monitoring information that was already accessible via LDAP under the "cn=Monitor" naming context, and via JMX.

$ curl -s -u user.0 http://localhost:8080/admin/monitor/
Enter host password for user 'user.0':
{
 "_id" : "monitor",
 "upTime" : "0 days 2 hours 49 minutes 54 seconds",
 "currentConnections" : "1",
 "totalConnections" : "32",
 "currentTime" : "20161024103215Z",
 "startTime" : "20161024074220Z",
 "productName" : "OpenDJ Server",
 "_rev" : "00000000644a67b2",
 "maxConnections" : "3"
}

The /admin REST endpoints can be protected with different authorization mechanisms, from HTTP Basic to OAuth2. The endpoints can also be disabled entirely if needed, using dsconfig.

These administrative REST endpoints can be tested with the OpenDJ nightly builds. They are also available to ForgeRock customers as part of our latest update of the ForgeRock Identity Platform.

More about OpenDJ support for JSON attribute values

In a previous post, I introduced the new JSON syntax, JSON query and matching rules that are delivered as part of the OpenDJ LDAP directory server. Today, I will give more insight into how to customise the syntax, tune the matching rules for smarter and more efficient indexing, and highlight some best practices for using the JSON syntax.

JSON Syntax Validation

When defining an attribute with the JSON syntax, the server validates that the JSON value is compliant with the JSON RFC. OpenDJ offers a few options to relax some of the constraints of valid JSON. To change the settings of the syntax, you must use dsconfig --advanced.

>>>> Configure the properties of the Core Schema

Property Value(s)
 ----------------------------------------------------------------------
 1) allow-attribute-types-with-no-sup-or-syntax true
 2) allow-zero-length-values-directory-string false
 3) disabled-matching-rule NONE
 4) disabled-syntax NONE
 5) enabled true
 6) java-class org.opends.server.schema.CoreSchemaProvider
 7) json-validation-policy strict
 8) strict-format-certificates true
 9) strict-format-country-string true
 10) strict-format-jpeg-photos false
 11) strict-format-telephone-numbers false
 12) strip-syntax-min-upper-bound-attribute-type-description false

?) help
 f) finish - apply any changes to the Core Schema
 c) cancel
 q) quit

Enter choice [f]: 7


>>>> Configuring the "json-validation-policy" property

Specifies the policy that will be used when validating JSON syntax values.

Do you want to modify the "json-validation-policy" property?

1) Keep the default value: strict
 2) Change it to the value: disabled
 3) Change it to the value: lenient

?) help
 q) quit

Enter choice [1]:

Strict is the default mode.

Disabled means that the server will not try to validate the content of a JSON value.

Lenient means that it will validate the JSON value, but tolerate comments, single quotes and unquoted control characters.
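
The same change can be made non-interactively. Here is a sketch, assuming the default "Core Schema" provider name and administration port (json-validation-policy being an advanced property, hence the --advanced flag):

$ dsconfig -h localhost -p 4444 \
  -D "cn=Directory Manager" -w secret12 -X -n --advanced \
  set-schema-provider-prop \
  --provider-name "Core Schema" \
  --set json-validation-policy:lenient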

JSON Matching Rule and Indexing

Like any attribute in the OpenDJ server, attributes with a JSON syntax can be indexed.

$ dsconfig -h localhost -p 4444 \
  -D "cn=Directory Manager" -w secret12 -X -n \
  set-backend-index-prop \
  --backend-name userRoot \
  --index-name json --set index-type:equality

By default, the server actually indexes each field of all JSON values. If the values are large and complex, indexing will result in a lot of disk I/O, possibly impacting performance for write operations.

If you know which fields of the JSON values will be queried by the client applications, you can optimise the index and specify which JSON fields are indexed. This is done by creating a new custom schema provider for the JSON query. You can choose to overwrite the default JSON query matching rules (as illustrated below), which will affect all JSON attributes, or you can choose to create a new rule (with a new name and OID).

In the example below, the custom schema provider overwrites the default caseIgnoreJsonQueryMatch, and only indexes the JSON fields _id and name with its subfields.

$ dsconfig -h localhost -p 4444 \
  -D "cn=Directory Manager" -w secret12 -X -n \
 create-schema-provider --provider-name "Json Schema" \
 --type json-schema --set enabled:true \
 --set case-sensitive-strings:false \
 --set ignore-white-space:true \
 --set matching-rule-name:caseIgnoreJsonQueryMatch \
 --set matching-rule-oid:1.3.6.1.4.1.36733.2.1.4.1 \
 --set indexed-field:_id \
 --set "indexed-field:name/**" 

When you overwrite the default matching rule, or you define a new one, you need to rebuild the indexes for all attributes that are making use of it.

Best Practices

The support for JSON attributes in OpenDJ is very new, but we can already recommend how best to use them.

The first thing is to use the JSON syntax for attributes that are single-valued. Indexing is designed to associate values with entries. Because JSON query indexes are built from all the fields of the JSON objects, an entry will be returned if a query matches all fields, even if those fields belong to different values of the attribute.

The JSON syntax is handy to store complex JSON objects in a single attribute and query them through any field. However, the larger the values, the greater the impact on the directory server’s performance. As all JSON fields are indexed by default, the more fields there are, the more expensive indexing will be. Also, because the JSON objects are LDAP attribute values, the only way to change a value is to replace it with a new one (or delete the value and add a new one, operations that involve even more bytes). There is no patch operation on the value. Finally, OpenDJ stores all attributes of an entry in a single database record, so any change in the entry requires writing the whole entry again.

As we’ve seen above, OpenDJ proposes a way to customise the JSON queries and the JSON fields that are indexed. We suggest that you make use of this capability and optimise the indexing of JSON objects for the queries run by the client applications.

If you plan to store different kinds of JSON objects in an OpenDJ directory service, define different attributes with the JSON syntax, and use a custom JSON query per attribute. For example, let’s assume you will have entries that are persons with an address attribute using the JSON syntax, and other entries that represent OAuth2 tokens, where the main token attribute also uses the JSON syntax. You should define an address attribute and a token attribute, both with the JSON syntax, but with their own specific matching rules, like below.

attributeTypes: ( 1.3.6.1.4.1.36733.2.1.1.998 NAME 'address'
  SYNTAX 1.3.6.1.4.1.36733.2.1.3.1
  EQUALITY caseIgnoreJsonAddressQueryMatch SINGLE-VALUE )

attributeTypes: ( 1.3.6.1.4.1.36733.2.1.1.999 NAME 'token'
  SYNTAX 1.3.6.1.4.1.36733.2.1.3.1
  EQUALITY caseIgnoreJsonTokenQueryMatch SINGLE-VALUE )

where the matching rules are defined as such:

$ dsconfig -h localhost -p 4444 \
  -D "cn=Directory Manager" -w secret12 -X -n \
 create-schema-provider \
 --provider-name "Address Json Schema" \
 --type json-schema --set enabled:true \
 --set case-sensitive-strings:false \
 --set ignore-white-space:true \
 --set matching-rule-name:caseIgnoreJsonAddressQueryMatch \
 --set matching-rule-oid:1.3.6.1.4.1.36733.2.1.4.998

and

$ dsconfig -h localhost -p 4444 \
  -D "cn=Directory Manager" -w secret12 -X -n \
 create-schema-provider \
 --provider-name "Token Json Schema" \
 --type json-schema --set enabled:true \
 --set case-sensitive-strings:false \
 --set ignore-white-space:true \
 --set matching-rule-name:caseIgnoreJsonTokenQueryMatch \
 --set matching-rule-oid:1.3.6.1.4.1.36733.2.1.4.999 \
 --set indexed-field:token_type \
 --set indexed-field:expires_at \
 --set indexed-field:access_token

Note that there is an issue with OpenDJ 4.0.0-SNAPSHOT (nightly builds): when you define a new Schema Provider, you need to restart the server for it to take effect.

Storing JSON objects in LDAP attributes…

Until recently, the only way to store a JSON object in an LDAP directory server was to store it as a string (either a Directory String, i.e. a sequence of UTF-8 characters, or an Octet String, i.e. a blob of octets).

But now, in OpenDJ, the open source LDAP directory service in Java, there is support for new syntaxes: one for JSON objects and one for JSON queries. Associated with the JSON query syntax, a couple of matching rules, which can be easily customised and extended, have been defined.

To use the syntax and matching rules, you should first extend the LDAP schema with one or more new attributes, and use these attributes in object classes. For example:

dn: cn=schema
objectClass: top
objectClass: ldapSubentry
objectClass: subschema
attributeTypes: ( 1.3.6.1.4.1.36733.2.1.1.999 NAME 'json'
SYNTAX 1.3.6.1.4.1.36733.2.1.3.1 EQUALITY caseIgnoreJsonQueryMatch SINGLE-VALUE )
objectClasses: (1.3.6.1.4.1.36733.2.1.2.999 NAME 'jsonObject'
SUP top MUST (cn $ json ) )

Just copy the LDIF above into config/schema/95-json.ldif, and restart the OpenDJ server. Make sure you use your own OIDs when defining schema elements. The ones above are samples and should not be used in production.

Then, you can add entries in the OpenDJ directory server like this:

$ ldapmodify -a -D cn=directory\ manager -w secret12 -h localhost -p 1389

dn: cn=bjensen,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
cn: bjensen
json: { "_id":"bjensen", "_rev":"123", "name": { "first": "Babs", "surname": "Jensen" }, "age": 25, "roles": [ "sales", "admin" ] }

dn: cn=scarter,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
cn: scarter
json: { "_id":"scarter", "_rev":"456", "name": { "first": "Sam", "surname": "Carter" }, "age": 48, "roles": [ "manager", "eng" ] }

The very nice thing about the JSON syntax and matching rules is that OpenDJ understands how the values of the json attribute are structured, and it becomes possible to make specific queries using the JSON query syntax.

Let’s search for all jsonObjects that have a json value with a specific _id:

$ ldapsearch -D cn=directory\ manager -w secret12 -h localhost -p 1389 -b "dc=example,dc=com" -s sub "(json=_id eq 'scarter')"

dn: cn=scarter,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
json: { "_id":"scarter", "_rev":"456", "name": { "first": "Sam", "surname": "Carter" }, "age": 48, "roles": [ "manager", "eng" ] }
cn: scarter

We can run more complex queries, still using the JSON Query Syntax:

$ ldapsearch -D cn=directory\ manager -w secret12 -h localhost -p 1389 -b "dc=example,dc=com" -s sub "(json=name/first sw 'b' and age lt 30)"

dn: cn=bjensen,ou=people,dc=example,dc=com
objectClass: top
objectClass: jsonObject
json: { "_id":"bjensen", "_rev":"123", "name": { "first": "Babs", "surname": "Jensen" }, "age": 25, "roles": [ "sales", "admin" ] }
cn: bjensen

For a complete description of the query filter expressions, please refer to the ForgeRock Common REST (CREST) Query Filter documentation.

The JSON matching rule supports indexing which can be enabled using dsconfig against the appropriate attribute index. By default all JSON fields of the attribute are indexed.

In a followup post, I will give more advanced configuration of the JSON Syntax, detail how to customise the matching rule to index only specific JSON fields, and will outline some best practices with the JSON syntax and attributes.

OpenDJ: Monitoring Unindexed Searches…

OpenDJ, the open source LDAP directory service, makes use of indexes to optimise search queries. When a search query doesn’t match any index, the server will cursor through the whole database to return the entries, if any, that match the search filter. These unindexed queries can require a lot of resources: I/O, CPU… In order to reduce resource consumption, OpenDJ rejects unindexed queries by default, except for the Root DNs (i.e. for cn=Directory Manager).

In previous articles, I’ve talked about privileges for administrative accounts, and also about analyzing search filters and indexes.

Today, I’m going to show you how to monitor for unindexed searches by keeping a dedicated log file, using the traditional access logger and filtering criteria.

First, we’re going to create a new access logger, named “Searches”, that will write its messages under “logs/search”.

dsconfig -D cn=directory\ manager -w secret12 -h localhost -p 4444 -n -X \
    create-log-publisher \
    --set enabled:true \
    --set log-file:logs/search \
    --set filtering-policy:inclusive \
    --set log-format:combined \
    --type file-based-access \
    --publisher-name Searches

Then we define a filtering criteria that restricts what is logged in that file: only “search” operations that are marked as “unindexed” and take more than 5000 milliseconds.

dsconfig -D cn=directory\ manager -w secret12 -h localhost -p 4444 -n -X \
    create-access-log-filtering-criteria \
    --publisher-name Searches \
    --set log-record-type:search \
    --set search-response-is-indexed:false \
    --set response-etime-greater-than:5000 \
    --type generic \
    --criteria-name Expensive\ Searches

Voila! Now, whenever a search request is unindexed and takes more than 5 seconds, the server will log the request to logs/search (in a single line) as below:

$ tail logs/search
[12/Sep/2016:14:25:31 +0200] SEARCH conn=10 op=1 msgID=2 base="dc=example,
dc=com" scope=sub filter="(objectclass=*)" attrs="+,*" result=0 nentries=
10003 unindexed etime=6542

This file can be monitored and used to trigger alerts to administrators, or simply used to collect and analyse the filters that result in unindexed requests, in order to better tune the OpenDJ indexes.
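
For example, a quick way to see which filters come up most often (assuming the single-line combined log format shown above) is a small shell pipeline:

# count occurrences of each filter in the dedicated search log
$ grep -o 'filter="[^"]*"' logs/search | sort | uniq -c | sort -rn | head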

Note that it is sometimes a good option to leave some requests unindexed, when the cost of indexing them outweighs the benefits of the index: typically requests that are infrequent, run by specific administrators for reporting purposes, and expected to return a large number of entries. In that case, a best practice is to dedicate a replica to administration and run these expensive requests against it. It also helps if the client applications are tuned to expect these requests to take a long time.

Data Confidentiality with OpenDJ LDAP Directory Services

Directory servers have been used and continue to be used to store and retrieve identity information, including some data that is sensitive and should be protected. OpenDJ LDAP Directory Services, like many directory servers, has an extensive set of features to protect the data, from securing network connections and communications, authenticating users, to access controls and privileges… However, in the last few years, the way LDAP directory services are deployed and managed has changed significantly, as they move to the “Cloud”. Many ForgeRock customers already deploy OpenDJ servers on Amazon or MS Azure, and the requirements for data confidentiality are increasing, especially as the file system and disk management are no longer under their control. For that reason, we’ve recently introduced a new feature in OpenDJ, giving administrators the ability to encrypt all or part of the directory data before it is written to disk.

The OpenDJ Data Confidentiality feature can be enabled on a per database backend basis to encrypt LDAP entries before they are stored to disk. Optionally, indexes can also be protected, individually: an administrator may choose to protect all indexes, or only those that contain data that should remain confidential, like cn (common name), sn (surname)… Additionally, confidentiality can be enabled for the replication logs; it then applies to all changes from all database backends. Note that if data confidentiality is enabled on an equality index, that index can no longer be used for ordering, and thus not for initial substring or sorted search requests.

Example of command to enable data confidentiality for the userRoot backend:

dsconfig set-backend-prop \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -n -X \
 --backend-name userRoot --set confidentiality-enabled:true

Data confidentiality is a dynamic feature: it can be enabled or disabled without stopping the server. When enabling it on a backend, only entries created or updated after that point are encrypted; if existing data needs confidentiality, it is better to export and re-import the data. Index confidentiality behaves differently: when changing the data confidentiality setting of an index, you must rebuild that index before it can be used to serve search requests.
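
As an illustration, enabling confidentiality on the cn index of the userRoot backend and then rebuilding that index could look like the commands below; the index name is only an example, and the exact options should be checked against the dsconfig and rebuild-index references for your version:

# cn is just an example of an index holding confidential data
dsconfig set-backend-index-prop \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -n -X \
 --backend-name userRoot --index-name cn \
 --set confidentiality-enabled:true

rebuild-index \
 -h opendj.example.com -p 4444 \
 -D "cn=Directory Manager" -w secret12 -X \
 --baseDN dc=example,dc=com --index cn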

Key Management - Photo adapted from https://www.flickr.com/people/ecossystems/

When enabling data confidentiality, you can select the cipher algorithm and the key length, again on a per database backend basis. The encryption key itself is generated on the server and securely distributed to all replicated servers through the replication of the Admin Backend (“cn=admin data”), so it is never exposed to any administrator. Should a key get compromised, we provide a way to mark it as such and generate a new key. Also, a backup of an encrypted database backend can be restored on any server with the same configuration, as long as the server still has its configuration and its Admin backend intact. Restoring such a backend backup to a fresh new server requires that the server is configured for replication first.

The Data Confidentiality feature can be tested with the OpenDJ nightly builds. It is also available to ForgeRock customers as part of our latest update of the ForgeRock Identity Platform.

Migrating from SunDSEE to OpenDJ

As the legacy Sun product has reached its end of life, many companies are looking at migrating from Sun Directory Server Enterprise Edition [SunDSEE] to ForgeRock Directory Services, built on the OpenDJ project.

Several of our existing customers have already done this migration, whether in house or with the help of partners. Some even did the migration in two weeks. In every case, the migration was smooth and easy. I’m regularly asked whether we have a detailed migration guide and whether we can recommend tools to keep the two services running side by side, synchronized, until all applications have moved to the ForgeRock Directory Services deployment.

My colleague Wajih, a long-time directory expert, has just published an article on the ForgeRock.org wikis that describes in detail how to do DSEE to OpenDJ system-to-system synchronization using the ForgeRock Identity Management product.

If you are planning a migration, check it out. It is that simple!

 

Updates:

  • Update on June 8th to add link to A Global Bank case study.

OpenDJ LDAP Directory Services update

The new version of ForgeRock Directory Services, based on OpenDJ 3.0, was released in January and I’ve already written about the new features here, here and here.

We’ve now started the development of the next releases. We’ve updated the high level roadmap on our wiki, to give you an idea of what’s coming.

The last few weeks have been very active, as you can see on our JIRA dashboard.

[Screenshot of the JIRA dashboard, 18 March 2016]

There are already a few new features and enhancements in the master branch of our Git repository:

A Bcrypt password storage scheme. The new scheme is meant to help with the migration of user accounts from other systems, without requiring a password reset. Bcrypt also provides a much stronger level of security for hashing passwords, as its number of iterations is configurable. But since OpenDJ 2.6, we have already been providing a PBKDF2 password storage scheme, which OWASP recommends over Bcrypt for securing passwords.
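
As a rough sketch, enabling the new scheme and making it the default for new passwords could look like the dsconfig commands below; the scheme and policy names here are assumptions to verify against your server’s configuration:

# "Bcrypt" and "Default Password Policy" are assumed configuration entry names
dsconfig set-password-storage-scheme-prop \
 -h localhost -p 4444 -D "cn=Directory Manager" -w secret12 -n -X \
 --scheme-name Bcrypt --set enabled:true

dsconfig set-password-policy-prop \
 -h localhost -p 4444 -D "cn=Directory Manager" -w secret12 -n -X \
 --policy-name "Default Password Policy" \
 --set default-password-storage-scheme:Bcrypt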

Some enhancements to our performance testing tools, part of the OpenDJ LDAP Toolkit. All the *rate tools have a new way of computing statistics, providing more reliable and consistent results while reducing the overhead of producing them.

Some performance enhancements in various areas, including replication, group management, overall request processing…

If you want to see it for yourself, you can check out the code from our Git repository and build it, or you can grab the latest nightly build.

Play with OpenDJ and let us know how it works for you.

What’s new in OpenDJ 3.0, Part III

In the previous posts, I talked about the new PDB Backend in OpenDJ 3.0, and the other changes with backends, replication and the changelog.

In this last article about OpenDJ 3.0, I’m presenting the most important new features and enhancements in this major release:

Certificate Matching Rules.

OpenDJ now implements the CertificateExactMatch matching rule in compliance with “Lightweight Directory Access Protocol (LDAP) Schema Definitions for X.509 Certificates” (RFC 4523) and implements the schema and the syntax for certificates, certificate lists and certificate pairs.

It’s now possible to search a directory to find an entry with a specific certificate, using a filter such as below:

(userCertificate={ serialNumber 13233831500277100508, issuer rdnSequence:"CN=Babs Jensen,OU=Product Development,L=Cupertino,C=US" })

Password Storage Schemes

The PKCS5S2 Password Storage Scheme has been added to the list of supported storage schemes. While it is less secure and flexible than PBKDF2, it allows some of our customers to migrate from systems that use the PKCS5S2 algorithm. Other password storage schemes have been enhanced to support arbitrary salt lengths, which helps with other migrations (without requiring all users to get a new password).

Disk Space Monitoring.

In previous releases, each backend had its own disk space monitoring function, regardless of the filesystems or disks used. In OpenDJ 3.0, we’ve created a disk space monitoring service, and backends, replication and log services register with it. This allows the server to optimise the resources spent on monitoring, while ensuring that all disks containing writable data are monitored and that alerts are raised when free space reaches a low threshold.

Improvements

There are improvements in many areas of the server: in the REST to LDAP services and gateway, optimisations on indexes, dsconfig batch mode, the DSML Gateway supporting SOAP 1.2, native packages… For the complete details, please read the Release Notes.

As always, the best way to really see and feel the difference is by downloading and installing the OpenDJ server, and playing with it. We’re providing a Zip installation, an RPM and a Debian Package, the DSML Gateway and the REST to LDAP Gateway as war files.

Over the course of the development of OpenDJ 3.0, we’ve received many contributions, in the form of code, issues raised in our JIRA, documentation… We address our deepest thanks to all the contributors and developers:

Andrea Stani, Auke Schrijnen, Ayami Tyndal, Brad Tumy, Bruno Lavit, Bernhard Thalmayr, Carole Forel, Chris Clifton, Chris Drake, Chris Ridd, Christian Ohr, Christophe Sovant, Cyril Grosjean, Darin Perusich, David Goldsmith, Dennis Demarco, Edan Idzerda, Emidio Stani, Fabio Pistolesi, Gaétan Boismal, Gary Williams, Gene Hirayama, Hakon Steinø, Ian Packer, Jaak Pruulmann-Vengerfeldt, James Phillpotts, Jeff Blaine, Jean-Noël Rouvignac, Jens Elkner, Jonathan Thomas, Kevin Fahy, Lana Frost, Lee Trujillo, Li Run, Ludovic Poitou, Manuel Gaupp, Mark Craig, Mark De Reeper, Markus Schulz, Matthew Swift, Matt Miller, Muzzol Oliba, Nicolas Capponi, Nicolas Labrot, Ondrej Fuchsik, Patrick Diligent, Peter Major, Quentin Cassel, Richard Kolb, Robert Wapshott, Sébastien Bertholet, Shariq Faruqi, Stein Myrseth, Sunil Raju, Tomasz Jędrzejewski, Travis Papp, Tsoi Hong, Violette Roche-Montané, Wajih Ahmed, Warren Strange, Yannick Lecaillez. (I’m sorry if I missed anyone…)