
Archive for the ‘Discovery’ Category

Knowing who has viewed and changed a record is something we’ve long stored in a format unfriendly to human eyes, intended to be available for chain-of-custody auditing.  While we’ll continue to keep our machine-friendly copy as gospel, we’ve built in some new functionality to provide a much friendlier front end for everyday use.

You can now find what has changed and who has changed it on any given record via the “Views & Edits” tab in Discovery Cloud and Trial Cloud.  Cloud Preservation will offer similar functionality via its “Views” tab.

screenshot from Trial Cloud

Read Full Post »

Document redaction has come to the iPad, bringing another powerful feature to your mobile toolset.

The functionality mimics that of its desktop-browser counterpart and is available to all standard- and advanced-level users via a dropdown while viewing the document:

It’s another great tool in our iPad toolbox!

Read Full Post »

The volume of data involved in the e-discovery process is imposing.  Removing exact duplicates prior to review is the most powerful and sensible way to eliminate obviously redundant data.  This not only reduces the overall data volume, but also ensures each document needs to be reviewed only once.

In the latest enhancement to the Nextpoint product lineup, including Cloud Preservation, Discovery Cloud, and Trial Cloud, automatic “deduplication” is built in — allowing users to eliminate duplicative data simply and easily via a user-friendly interface.  This is a huge efficiency improvement, reducing the time and cost of e-discovery, and we are excited to bring this innovation to our customers.

The two-phase deduplication process.

1. Preventing duplicate uploads.

As an initial step, uploaded files (zip, pst, loose file, etc.) are compared to all previous uploads.  If the upload is an exact duplicate of a previous upload, the application will ask you to confirm that you want the duplicative files loaded.  Frequently, this step alone can prevent large numbers of duplicate documents.

2. Preventing duplicate documents/files contained in different archives.

Often, the same file (an email, document, etc.) has been collected from multiple sources.  When this occurs, the upload will slip by Phase 1 because the container (zip/pst/etc.) is physically different from anything previously uploaded.  This is by far the most common cause of duplicate documents.

Individual checks occur on files contained inside the top-level container to search for an exact match*.  When an exact match is caught, introduction of the duplicative data is prevented; instead, a link is created to the pre-existing copy of the document.  The pre-existing document will now indicate that it was loaded in both Batch 1 and Batch 2.  Additionally, fields such as location URI and custodian will be merged.

* Metadata from a load file and/or changes made via the web application after the previous load completed (designating, reviewing, changing shortcuts, etc) will be taken into consideration when making the “exact match” determination.  By default, a file hash is employed to identify candidates for “exact matches” – optionally, this can be expanded to include documents that may have a different file hash but share an Email-Message-ID.
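The matching logic described above can be sketched in Python.  This is a minimal illustration with hypothetical names and a hypothetical hash choice; Nextpoint's actual implementation is not public:

```python
import hashlib


def file_digest(data: bytes) -> str:
    """Content hash used to identify exact-match candidates.

    SHA-256 is an assumption for illustration; any strong hash works.
    """
    return hashlib.sha256(data).hexdigest()


def is_duplicate(data: bytes, metadata: dict, seen: dict,
                 match_on_message_id: bool = False) -> bool:
    """Return True if this file duplicates one already loaded.

    `seen` maps a match key to the metadata of the previously loaded
    copy.  A hash match only counts as a duplicate when the associated
    metadata is also identical; differing metadata lets the file into
    the system so a human reviewer can decide.
    """
    key = file_digest(data)
    if match_on_message_id and metadata.get("email_message_id"):
        # Optionally widen matching to documents that share an
        # Email-Message-ID even when their file hashes differ.
        key = metadata["email_message_id"]
    previous = seen.get(key)
    if previous is not None and previous == metadata:
        return True
    seen[key] = metadata
    return False
```

The same check covers both modes: by default only the content hash is consulted, and enabling the Email-Message-ID option swaps in a broader key for email documents.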

Understanding what has been deduplicated.

The batch details screen (choose “Import/Export” from the menu bar in Discovery Cloud or “More” -> “Imports” in Trial Cloud) has been enhanced to provide information on how much and what has been deduplicated.

The main section provides verbose lists of Actions, Errors, and Skipped Rows that occurred during processing.  It also provides abbreviated lists of new and duplicate documents encountered during processing, with links to quickly view the full set(s) via our normal search/filter interface.

The Batch Summary section (located in the sidebar) has been enhanced to provide information about the uploaded file, the results and status of processing, links to resulting documents, and the ability to reprocess the originally uploaded file completely.

Handling the duplicates that make it through.

The “same file” can make it into the system through a few different channels.
  1. A user deliberately disabled deduplication during an upload or selected the “reprocess file” option on an entire batch.
  2. The same document was attached to multiple emails.  The meaning of a document can vary wildly based on its context, so we consider an email and its attachment(s) a single unit during the deduplication process.
  3. The “same file” by content hash will be allowed into the system if the associated metadata is different.  This could happen due to differences specified in load files, changes made to metadata in the system following uploads, etc.
In all of these situations, we display related files in the sidebar (of Discovery Cloud) so the human reviewer can determine whether the associated metadata warrants further deduplication.

Customization & Disabling Deduplication

Deduplication can be disabled on any individual upload/reprocessing request.  It can be disabled at the instance level via “Settings” => “General Settings” => “Deduplication”.  The settings section also provides the ability to use Email-Message-ID in the “exact match” determination.

When will it be available?

This functionality is available immediately in both Discovery and Trial Clouds.
As always, custom support options are available on request to address unique deduplication needs.  We’re excited about these new improvements and the positive impact they’ll have for our customers going forward.

Read Full Post »

You’ve uploaded all of your docs into Discovery Cloud and, after some searching for agreed-upon keywords, have broken up the large universe of documents into several smaller sets (“subreviews”), but… these “smaller” sets still number in the high thousands of docs!  You need to go further – you need to break these sets up again so each of your reviewers has a manageable set of documents they can reasonably attack.

The “Split into subreviews” option on the landing page of your Review provides just that.

With a click, you’ll be on the road to breaking up that large Subreview into several smaller pieces.  You can break it up into as many pieces as you’d like by simply providing their names.  You may choose to name them things like “Environment-Bob” and “Environment-Sarah”, but you can get as original/specific as you like.

You control what happens to the original set of documents (“Environment” in this example).  Keep it around to maintain a rolled-up view of what’s going on in the component subreviews, or remove it to reduce clutter.  You also control what (if any) additional documents should be pulled into the set.  For example, you could pull emails related to your documents into the overall document set, to ensure that they’re included.

Related documents will be placed together (and sequentially) into created subreviews to provide continuity for the reader.  For example, you won’t have to worry about an email landing in a different subreview than its attachments.  This may lead to slightly uneven document counts in subreviews, but only in extreme circumstances will the overall document counts be wildly different.
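One way to keep related documents together while keeping counts roughly even is sketched below.  The function name and grouping structure are illustrative assumptions, not Nextpoint's actual algorithm:

```python
def split_into_subreviews(groups, names):
    """Distribute document groups across named subreviews.

    `groups` is a list of document-id lists; each group (e.g. an email
    together with its attachments) stays intact in one subreview, which
    is why resulting counts can be slightly uneven.
    """
    subreviews = {name: [] for name in names}
    for group in groups:
        # Place the whole group into the currently smallest subreview,
        # keeping related documents together and sequential.
        target = min(names, key=lambda n: len(subreviews[n]))
        subreviews[target].extend(group)
    return subreviews
```

For example, splitting an email with two attachments plus three loose documents across “Environment-Bob” and “Environment-Sarah” would never separate the email from its attachments, even if that leaves one reviewer with a document or two more than the other.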

After (or while) your documents are being split up, you can visit the “Settings” section to assign the subreviews to specific reviewers.  If the reviewer is in the Nextpoint “Reviewer” role-type, they will only have visibility into subreviews they are assigned.  If the reviewer is of a different role-type (“Advanced” or “Standard”), assignment provides some clarity as to who is working on what.

The ability to easily break up and assign large subreviews will provide clarity and visibility into the higher-level task, helping you get the job done not only faster, but better.

Read Full Post »

Ever wonder how much power has been dedicated to your current import?  Along with some recent infrastructure upgrades, we’ve brought that information right up front where you can see it, giving you visibility into the elastic ramp-up of dedicated servers and processors working on your requests.  (Elastic ramp-up is a core capability of any legitimate cloud computing solution — Here’s Amazon’s case study of Nextpoint’s implementation.)

Available now via:

  • in Discovery/Review Cloud: “Imports/Exports”
  • in Trial (Prep) Cloud: “More” -> “My Downloads” -> “Imports”

Each processing request begins by breaking the work up into smaller “Jobs”.  There’s a lot of logic put into just how that breakup occurs, but the gist of it is: bigger requests = more jobs = more dedicated processors to get your work done.
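That breakup might look something like the following sketch; the chunk size and function name are illustrative assumptions, not Nextpoint's actual values:

```python
def plan_jobs(doc_ids, chunk_size=500):
    """Break a processing request into smaller jobs.

    Each job can be handed to its own worker, so bigger requests fan
    out to more dedicated processors.  The chunk_size of 500 is a
    made-up figure for illustration.
    """
    return [doc_ids[i:i + chunk_size]
            for i in range(0, len(doc_ids), chunk_size)]
```

A request covering 1,200 documents would yield three jobs under this scheme, while a 10-document request stays a single job, matching the "bigger requests = more jobs" intuition.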

We’re happy to be pushing this information to the front and hope that you like it too!

Read Full Post »

Our electronic Exhibit Stamping has always been geared towards improving the tedious act of stamping hundreds or even thousands of docs.  In fact, we’ve even told you about the code we use to do it, but what about the smaller day-to-day jobs?

We’re introducing some changes to our Exhibit Stamping interface that will have you covered for big and small batches of docs alike.

Search for the doc(s) that you want to stamp.

Select “stamp” for the appropriate designation of the first (only?) doc that you’d like to hit — You’ll need to “Add” the designation first, if you haven’t already.

When you arrive in the stamping interface, the initial state will reflect the document within the scope of the current stamping request.  That is, if you’re stamping as “Defense” and the doc already has a “Defense” stamp, you will be greeted by the document with its stamp in the current position; adjust if you need to.  If the doc has not previously been stamped (or has, but not as “Defense”), the stamper will not be shown by default.

Place the stamp as you see fit, then close the stamping interface or move to the next document in the set.  If you uncheck the stamping checkbox, no stamp will be applied (if previously stamped, it will be removed).

It’s a much cleaner and faster interface and we’re really happy with how it’s turned out.  Hope you like it!

Read Full Post »

A lot of exciting things have been going on in the Nextpoint Lab.  Here’s an abbreviated view of some recent happenings:

  • A recent Amazon (Amazon Web Services) case study looked into how Nextpoint harnesses their cloud offerings, taking e-discovery processing power to the next level.
  • CloudPreservation.com has moved out of beta, including support for robots.txt and site maps.
  • You may have noticed McAfee and eTrust certification badges on our login page.  They’ll be keeping us honest on a daily basis going forward.
  • We’ve introduced the ability to choose your own naming scheme for bulk PDF exports, allowing you to name the files by designation, title, whatever you like.

  • Transitioning as conditions change has never been easier as we’ve put the power to copy/move your data to different cloud offerings in your hands.
  • It’s easy to make a small mistake in deposition naming and end up with some depositions for “Bob Smith” and some for “Bob A Smith”.  Recent changes make merging a matter of a few clicks, preserving the depositions themselves as well as any relationships with documents/exhibits, videos, etc.

  • Our OCR capabilities have been upgraded, providing more accuracy and reliability pulling search text out of uploaded image files.
  • We are constantly updating our search indexes as new data comes in or changes are made.  Timestamps throughout the app now clearly indicate when the last pass was completed.
  • Time to generate on-the-fly PDFs can vary wildly based on the style of data contained, as well as the number of pages — We’ve put the choice of “wait or notify me when it’s done” in your hands, allowing you to stick around for an immediate download on smaller requests or move on to something else while it’s working on the heavier hitters.

Lots of exciting changes – and, of course, that’s not everything we’ve accomplished in the last couple months.  If you have any questions on these, just drop us a line at support@nextpoint.com

Read Full Post »
