Better handling of duplicate DNs for bulkloader : LUSENET : OiD Enhancement Requests : One Thread

At present, if the bulkloader encounters a duplicate DN in the LDIF file, the entire bulk load is aborted and the DNs of the duplicate entries are written to duplicateDN.log. It would be far better if the user could choose whether a duplicate DN aborts the load entirely, or whether the duplicate entries are simply filtered out into a bad file for subsequent correction and reloading (similar to the way SQL*Loader works) while the rest of the load continues. The user could also be allowed to specify how many duplicates may be encountered before the load is aborted.

I have been attempting to load an LDIF file containing 60000+ entries, and on a file of this size it is extremely slow and cumbersome to edit out the duplicates with vi or another editor by searching for the DNs listed in duplicateDN.log.
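As a rough illustration of the requested behavior, the filtering could be sketched as a single pass over the LDIF file: keep the first entry seen for each DN, and divert any later entry with the same DN into a bad file. The blank-line entry parsing and function names below are assumptions for the sketch, not part of the actual bulkloader:

```python
def split_entries(ldif_text):
    """Split LDIF text into entries, which are separated by blank lines."""
    entries = [e.strip("\n") for e in ldif_text.split("\n\n")]
    return [e for e in entries if e.strip()]

def filter_duplicate_dns(ldif_text):
    """Return (good, bad): first entry per DN goes to good,
    later entries with an already-seen DN go to bad."""
    seen = set()
    good, bad = [], []
    for entry in split_entries(ldif_text):
        # Find the entry's "dn: ..." line (DNs are case-insensitive).
        dn = next((line[4:].strip().lower() for line in entry.splitlines()
                   if line.lower().startswith("dn: ")), None)
        if dn is None or dn in seen:
            bad.append(entry)   # duplicate (or malformed) entry: set aside
        else:
            seen.add(dn)
            good.append(entry)  # first occurrence: load normally
    return good, bad
```

The good entries would then be loaded as usual, while the bad file is corrected and reloaded by the user, so one duplicate no longer forces the whole 60000-entry load to be repeated.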

-- Anonymous, March 04, 1999
