The attached rule was updated in October 2025 to include IdentityIQ 8.5. As a best practice, always use the latest rule indicated for your version of IdentityIQ.
The latest recommended version of the "Correct IDX" rule is found in this article and will be updated periodically as needed. It is supported on all current IIQ releases, up through version 8.5. This rule should not be part of a maintenance routine; run it only as a one-time fix when recommended by SailPoint Support.
The attached "Correct IDX" rule resolves IdentityIQ issues related to null indexes, including NullPointerException errors. These exceptions and errors are visible in the IdentityIQ logs and can occur at various execution points, including identity refresh, aggregation, the Perform Maintenance task, or other processing. Review the full stack trace of the exception to confirm that this rule will help correct the issue. If you are unsure about executing this rule, please reach out to Support by opening a ticket; include the logs containing the error stack trace, along with a description of when you are seeing the NullPointerException.
This rule should not be confused with the IIQ Integrity Scanner (IdentityIQ IDX Integrity Scanner), which should be considered only for IdentityIQ versions prior to 7.3. The approach described in the IdentityIQ IDX Integrity Scanner document has been superseded by the Support Data Collector Plugin.
IDX values are created and updated by Hibernate, a framework leveraged by IdentityIQ. IDX values are used in part to iterate over a list of objects more quickly, in operations such as finding all work items in a certification, aggregating accounts onto identities, or refreshing links on identities. The IDX value is a Hibernate data-persistence-layer pointer that tells the application the position of an object within a list.
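To make this concrete: conceptually, each persisted list element carries its position in an index column, and the list is rebuilt by placing each element at its stored index. The sketch below is plain Java with hypothetical names (it does not use Hibernate or IdentityIQ code) and only illustrates why a NULL index breaks list reconstruction:

```java
import java.util.*;

public class IdxSketch {
    // Each persisted row carries a value and its list position (the "idx" column).
    record Row(String value, Integer idx) {}

    // Rebuild an ordered list from rows, as a list-index mapping conceptually does.
    static List<String> rebuild(List<Row> rows) {
        String[] slots = new String[rows.size()];
        for (Row r : rows) {
            // A NULL idx leaves nowhere to place the element; we fail fast here,
            // much as IdentityIQ eventually surfaces a NullPointerException.
            if (r.idx() == null) {
                throw new NullPointerException("row '" + r.value() + "' has a NULL idx");
            }
            slots[r.idx()] = r.value();
        }
        return Arrays.asList(slots);
    }

    public static void main(String[] args) {
        // Row order in the database is arbitrary; idx recovers the list order.
        List<Row> good = List.of(new Row("b", 1), new Row("a", 0), new Row("c", 2));
        System.out.println(rebuild(good)); // [a, b, c]

        List<Row> bad = List.of(new Row("a", 0), new Row("b", null));
        try {
            rebuild(bad);
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e.getMessage());
        }
    }
}
```

The point of the sketch is that the stored index, not the row order, determines list position, so a single NULL idx leaves the collection unreconstructable.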
The most common cause of incorrect IDX values or NullPointerException errors is incorrect updates from custom code, for example failing to lock an object during an update, or deleting objects from IdentityIQ without taking the proper steps to remove related linked objects. Other causes are one-offs, for example network re-transmissions of packets or data due to collisions, packets arriving out of order, temporary loss of communication within a network, the database cache temporarily not accepting updates, a driver issue at the data layer, or other reasons.
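To illustrate the "failing to lock during an update" failure mode, here is a deterministic, IdentityIQ-free sketch (plain Java, hypothetical data) of two sessions doing a read-modify-write on the same indexed list without locking. One session's write is partially lost, leaving a gap in the index sequence:

```java
import java.util.*;

public class LostUpdateSketch {
    // Simulates two unsynchronized sessions updating the same indexed list.
    // Returns the list as rebuilt from the resulting idx values.
    static String[] corruptedSlots() {
        // Persisted rows: element -> idx (the list-index column).
        Map<String, Integer> store = new HashMap<>(Map.of("a", 0, "b", 1, "c", 2));

        // Both sessions read the same snapshot; neither locks the object first.
        Map<String, Integer> seenByA = new HashMap<>(store);
        Map<String, Integer> seenByB = new HashMap<>(store);

        // Session A deletes "b" and saves a compacted list: a=0, c=1.
        seenByA.remove("b");
        seenByA.put("c", 1);
        store = new HashMap<>(seenByA);

        // Session B, still holding the stale 3-element snapshot, appends "d" at
        // the next index it believes is free (3) and saves just that row.
        store.put("d", seenByB.size());

        // Rebuilding the list from these rows leaves a hole at slot 2, the kind
        // of gap that later surfaces as a null index / NullPointerException.
        String[] slots = new String[4];
        store.forEach((k, v) -> slots[v] = k);
        return slots;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(corruptedSlots())); // [a, c, null, d]
    }
}
```

In real IdentityIQ custom code the equivalent mistake is modifying and saving an object that another thread or task is also updating, without first obtaining the object lock; this sketch only models the resulting index corruption, not the actual API calls.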
Always test this rule in a lower environment prior to using it in production, and ensure that you have appropriate backups in place to handle recovery in the event of any data issues. Keep in mind that this rule is provided as a tool to assist customers, but you are responsible for maintaining the rule and ensuring its proper use. Please follow the directions below to perform this work:
Rule: SupportRuleIDXCK
TaskDefinition: Support IDXCK Rule
TaskDefinition: Support Task for Hibernate IDX
<server name> processed nn out of nn MySQL table(s) for v.r release
Please note that this task repairs only NULL IDX issues; it cannot fix problems such as null parent foreign keys or similar items that may show up as warnings. These types of issues are not as critical as correcting the NULL IDX issues.
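For intuition only (the actual logic of SupportRuleIDXCK is not published in this article), a NULL-IDX repair amounts to renumbering the rows of a list contiguously, keeping whatever ordering is recoverable from the surviving index values. A hypothetical sketch of that idea:

```java
import java.util.*;

public class RenumberSketch {
    // Rows are (value, idx) pairs where some idx values may be NULL.
    // Hypothetical repair: keep known indexes in relative order, push NULL-idx
    // rows to the end, then reassign contiguous 0..n-1 indexes.
    static List<Map.Entry<String, Integer>> repair(List<Map.Entry<String, Integer>> rows) {
        List<Map.Entry<String, Integer>> sorted = new ArrayList<>(rows);
        // List.sort is stable, so rows with equal keys keep their relative order.
        sorted.sort(Comparator.comparing((Map.Entry<String, Integer> e) ->
                e.getValue() == null ? Integer.MAX_VALUE : e.getValue()));
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            out.add(Map.entry(sorted.get(i).getKey(), i)); // contiguous, no NULLs
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> rows = List.of(
                new AbstractMap.SimpleEntry<>("a", 0),
                new AbstractMap.SimpleEntry<String, Integer>("b", null),
                new AbstractMap.SimpleEntry<>("c", 2));
        System.out.println(repair(rows)); // [a=0, c=1, b=2]
    }
}
```

This is only a mental model for what "repairing NULL IDX" means; the shipped rule operates on the actual database tables and its exact behavior may differ.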
If you have any questions or comments regarding this rule, please open a ticket with SailPoint Support.
Same boat as @aaron_burgemeister . If the IDX is not needed, we should have it removed.
Also, no IIQ implementation can be without code, and mind you, the code is whatever APIs are provided by SailPoint. In our case there are no custom DB updates/creates, just regular LCM, rules, workflows, etc. We even went so far as to check for closing, decaching, and locking/unlocking objects as recommended by SailPoint. Still we have these issues.
Latest case in point: we launched our quarterly certification, and the launch failed with the whole campaign going to an error state for no reason. We launched it again and this time it succeeded. The campaign that went dud the first time now has all the IDX values as null in one of the cert archive entity item tables, so we cannot delete the old cert campaign unless we run this IDX rule again.
Going to the question of engaging support: imagine trying to replicate this behavior when you have over 5K managers globally, in every country, certifying more than 2.5 million line items, with staging running across 6 servers (on a very large infra)! What would loggers do, and who can stop everything in production while trying to reproduce the issue? The staging itself takes hours with complicated exclusion (not inclusion) rules, and you still need the system processing LCM events for JML as BAU across every timezone!
Have already posted on SailPoint Ideas, but if the product needs to scale up to handle LCM, JML, and certs of every kind (transfers, LM, entitlement owners, targeted), then the tables need to be much more normalized, and the whole idea of CLOBs and IDX seems to work against that. Just my 2 cents.
@bbagaria Is that the actual scale of your deployment? We have what I thought was a large deployment, but you mentioned global, and we are certainly not global.
@teresa_warren @Eric_Mendes_CISSP Would it be appropriate to use this rule to check and clean up a non-production environment so that it can be monitored for future issues? Could the rule be used in "read only" mode to routinely check for and alert on any IDX issues?
If this rule is suitable for use in a production environment, and it correctly identifies and repairs IDX issues, why would it be bad to set it up to run as routine maintenance? I have some ideas, but I'd like your insights as well. The reasons that come to my mind are:
These all seem like good reasons to only run it as directed, but at the same time I value keeping things clean and running smoothly.
This rule is meant to be used only in situations where a null IDX is suspected; it is not meant to be run as a regularly scheduled task. A null IDX could be suspected when certain NullPointerException errors are presented, or for issues identified by SailPoint Support.
The reason we do not recommend using this as a regular task or for monitoring is that this rule is updated and tested with each new release of IdentityIQ and then updated on this post. Once the rule is scheduled as a task in an environment, it becomes an "out of sight, out of mind" problem: no one may think to update the scheduled rule until they see a failure.
Another reason is what you mentioned: you may be fixing an issue without identifying its root cause, so you get into a cycle of repair instead of fixing the code or deployment causing the issue.
Is there a way to get notified every time SailPoint makes an update to the IDX rule?
Please add support for the AH scheme specified via JNDI. Currently, the rule fails since no username/password is specified in iiq.props for the AH scheme.
It's been a while since I've checked this out, yet the problems with idx management in SailPoint's IdentityIQ continue, so here I am again, updating to be sure I have the latest version. Though I've used this fix for IdentityIQ's data issues for years, I decided to read the full article, and while there is some potentially good information, there is so much obviously bad information that I'm not sure how much of the rest I can trust. Specifically:
"The most common cause for incorrect IDX values or NullPointerException errors is usually incorrect updates using custom code, for example, failing to lock during an update to an object, or when objects are deleted from IdentityIQ without taking proper steps to remove related linked objects. Other causes are one-offs, for example caused by network re-transmissions of packets or data due to collisions, packets arriving out of order, temporary loss of communication within a network, the database cache temporarily not accepting updates, a driver issue at the data layer, or for other reasons."
The parts that make me question the whole document include the blatantly invalid finger-pointing at "network re-transmissions of packets or data due to collisions" and "packets arriving out of order". The "temporary loss of communication within a network" also seems highly suspicious. The reason these are suspicious is probably obvious to anybody with a basic networking background, but in summary: the idea of network re-transmissions (implying TCP is in use, which is a given anyway) causing a database data issue doesn't hold any water. Packets arriving out of order (also implying TCP) has the same lack of logic; out-of-order packets will never impact a database's data unless there are serious underlying coding issues, probably in the operating system, since IdentityIQ doesn't deal with packet management directly (that's what Java, the operating system, and network drivers are for). TCP was designed, decades ago, specifically to handle packet ordering; if it did not work reliably, there would be fundamental problems with communication across the whole Internet. It is possible, even expected, for packets to sometimes arrive out of order, but TCP does not pass the data up the stack to the application layer (HTTP in IdentityIQ's case, unless they mean JDBC) until the packets are fully reassembled in the correct order; again, if this did not work reliably, the Internet would be a mess. SailPoint using this as a possible reason for idx issues is farcical.
I also requested examples of what "custom code ... failing to lock during an update to an object" or "objects ... deleted from IdentityIQ without taking proper steps to remove related linked objects" would look like in code, since it is possible that something in this area causes the idx column issues, but again the answer was "no". Since so many of us are seeing idx issues, and a common factor we all share is probably not each other's custom code, I think it is far more likely that non-custom code (e.g. SailPoint's IdentityIQ code) is to blame. But it could also be that we all make the same mistake, and examples (as I requested) could help us all avoid causing these issues in our custom code.
Finally, the recommendation to "not be part of a maintenance routine" is, I believe, wrong, depending on what "maintenance" means to the writer vs. the reader (sadly, it is not defined). It is far more wasteful to spend tons of time looking for a non-existent NullPointerException in custom code, especially one that appears out of the blue when nothing has changed in months or years, than to have this rule do what it was meant to do: find IdentityIQ's idx column issues proactively. However, I suspect the guidance around maintenance is really to NOT run it regularly in "fix" mode, rather than scan mode, and I agree that running in fix mode regularly is probably a bad idea. We have saved tons of time by running in scan mode daily and getting an e-mail if/when it fails. Your mileage may vary, but running in scan mode should be safe, and it is better than spending hours or days looking for a bug in your code when the problem is actually in the database.
I tried to get Support to take this feedback, but they said "no" as usual and suggested I use the Ideas portal, so double-fail there, since fixing a company-published article via an "enhancement request" doesn't show much concern for the quality of articles. Instead, since this is where the problem exists, I'm posting here, but this will only help customers like me if they read the comments, or if the authors read and act on it.
While the article contains disturbingly wrong claims, which call into question the possibly correct claims, the idx fixer functionality definitely can help by identifying bad data in the database (whether caused by our custom code or IdentityIQ's own code).