Archive for the ‘Featured Feature’ Category

Previously in TrialCloud and DiscoveryCloud, there were two distinct mechanisms for importing documents: first as a single file or document, and second as a batch containing multiple files and an optional load file. With our last release and the introduction of S3 Folders, these methods have been combined into a common interface called “Import Files” while maintaining and building upon the original functionality. Here’s how it works.

After selecting the file(s) to import on the Import Files page, you can choose to process them as container files or as native files.

Importing Container Files

Choose “Container Files” when uploading a single file or folder that is being used only as a means of organizing and uploading the files it contains.

[Screenshot: Container import results. Information from the load file will be applied.]

Only the contents of the file or folder will be processed, indexed, and included in the case or review, not the container file itself. If a container file or folder is found to contain a Nextpoint load file, that load file will be applied as part of the import.

Importing Native Files

Choose “Native Files” if you are uploading multiple files or folders, or if you are uploading a single zip file or folder that is itself evidence. All files found while processing the selected items, including load files, will be processed as native files, and the information in the load files will not be applied. So be careful to use the Container Files option if you intend to apply a load file.

[Screenshot: Native import results. Native files will be processed as evidence.]

These improvements, along with the addition of S3 Folders, have streamlined the Import process and provided more flexibility in importing files to your Nextpoint TrialCloud or DiscoveryCloud repositories.  As always, we welcome your comments and feedback.

Read Full Post »

S3 Folders is an alternative to browser-based data import, allowing you to use a variety of client-based uploading tools to transmit data to Discovery Cloud and Trial Cloud. File size limitations are effectively removed, allowing you to upload large files (e.g. PST mailboxes) without the hassle of splitting them up.
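Since uploads land in Amazon S3, a scripted upload with a standard S3 client is one option. As a minimal sketch (the bucket name, key, and file path are placeholders, not actual Nextpoint values), an upload using Python’s boto3 library might look like this:

```python
# Minimal sketch of a client-side upload to S3 using boto3.
# The bucket, key, and file path below are placeholders -- substitute
# the values provided for your own repository.
import boto3

s3 = boto3.client("s3")

# Large files (e.g. a multi-gigabyte PST) are handled transparently:
# upload_file switches to a multipart upload under the hood when needed.
s3.upload_file(
    Filename="collections/custodian-smith.pst",  # local file (placeholder)
    Bucket="example-case-uploads",               # placeholder bucket
    Key="imports/custodian-smith.pst",           # placeholder key
)
```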

Following upload to Amazon S3, files may be selected for import via the batch creation screen’s file-picker:

Load file formats traditionally supported in browser uploads continue to be supported via both browser upload and Case Folder selection. Additionally, Case Folders supports the selection of loose files or directories containing loose files, making the upload process that much simpler. Uploaded directory structures containing load files enjoy the extra benefit of easy correction and drop-in replacement of load files when issues are discovered and remedied.

It’s an exciting development that we hope you’ll get a lot of mileage out of.

Read Full Post »

Our Discovery Cloud interface was designed and tuned to keep user focus on the primary goal: Review. Everything the interface does is intended to help users determine and set Responsive and Privileged status. That remains the primary focus; however, we’ve accumulated enough data about how folks are using Discovery Cloud to draw some conclusions and make some improvements.

Many “reviewers” are tasked not only with setting Responsive/Privileged codes, but also with adding some coding information, since they’re in the neighborhood. That has been a bit cumbersome, as “Preview” (an image of the record/document) and “Coding” have been tabs on the left side of the page, allowing you to view one or the other, but never both:

We’ve shifted that over to the right side of the page, allowing users to view the Preview, Review/Privileged status options, and Coding data all at once:

The change will reduce the number of clicks and focus changes necessary to get the job done and get on to the next document in the stack.

Read Full Post »

A record of who has viewed and changed each document is something we’ve long stored in a format unfriendly to human eyes, intended to be available for chain-of-custody auditing. While we’ll continue to keep our machine-friendly copy as gospel, we’ve built in some new functionality to provide a much friendlier front end for everyday use.

You can now find what has changed and who changed it on any given record via the “Views & Edits” tab in Discovery Cloud and Trial Cloud. Cloud Preservation will offer similar functionality via its “Views” tab.

[Screenshot from Trial Cloud]

Read Full Post »

Document redaction has come to the iPad, bringing another powerful feature to your mobile toolset.

The functionality mimics that of its desktop-browser counterpart and is available to all standard and advanced level users via a dropdown while viewing the document:

It’s another great tool in our iPad toolbox!

Read Full Post »

The volume of data involved in the e-discovery process is imposing. Removing exact duplicates prior to review is the most powerful and sensible way to eliminate obviously redundant data. This not only reduces the overall data volume, but also ensures that data needs to be reviewed only once.

In the latest enhancement to the Nextpoint product lineup, including Cloud Preservation, Discovery Cloud, and Trial Cloud, automatic “deduplication” is built in — allowing users to eliminate duplicative data simply and easily via a user friendly interface.  This is a huge efficiency improvement, reducing the time and cost of e-discovery and we are excited to bring this innovation to our customers.

The two-phase deduplication process.

1. Preventing duplicate uploads.

As an initial step, uploaded files (zip, pst, loose file, etc.) are compared to all previous uploads. If the upload is an exact duplicate of a previous upload, the application will ask you to confirm that you would like the duplicative files to be loaded. Frequently, this step alone can prevent large numbers of duplicate documents.

2. Preventing duplicate documents/files contained in different archives.

Oftentimes the same file (an email, document, etc.) has been collected from multiple sources. When this occurs, the upload will slip by Phase 1 because the container (zip/pst/etc.) is physically different from anything previously uploaded. This is by far the most common cause of duplicate documents.

Individual checks occur on the files contained inside the high-level container to search for an exact match*. When an exact match is caught, the introduction of the duplicative data is prevented; instead, the import links to the pre-existing copy of the document. The pre-existing document will then indicate that it was loaded in both Batch 1 and Batch 2. Additionally, fields such as location URI and custodian will be merged.

* Metadata from a load file and/or changes made via the web application after the previous load completed (designating, reviewing, changing shortcuts, etc) will be taken into consideration when making the “exact match” determination.  By default, a file hash is employed to identify candidates for “exact matches” – optionally, this can be expanded to include documents that may have a different file hash but share an Email-Message-ID.
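Conceptually, the hash-based side of that “exact match” check works something like the sketch below. This is a simplified illustration, not Nextpoint’s actual code: each incoming file is hashed, compared against documents already in the case, and either linked to the existing copy or added as new.

```python
# Simplified illustration of hash-based deduplication (not Nextpoint's code).
import hashlib

def file_hash(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def import_file(path, existing_docs, batch_id):
    """existing_docs maps content hash -> document record already in the case."""
    digest = file_hash(path)
    if digest in existing_docs:
        # Exact duplicate: link to the pre-existing document instead of
        # creating a new one, and record that it arrived in another batch.
        doc = existing_docs[digest]
        doc["batches"].append(batch_id)
        return doc
    # New content: create a fresh document record.
    doc = {"path": path, "hash": digest, "batches": [batch_id]}
    existing_docs[digest] = doc
    return doc
```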

Understanding what has been deduplicated.

The batch details screen (choose “Import/Export” from the menu bar in Discovery Cloud or “More” -> “Imports” in Trial Cloud) has been enhanced to provide information on how much and what has been deduplicated.

The main section provides verbose lists of Actions, Errors, and Skipped Rows that occurred during processing.  It also provides abbreviated lists of new and duplicate documents encountered during processing, with links to quickly view the full set(s) via our normal search/filter interface.

The Batch Summary section (located in the sidebar) has been enhanced to provide information about the uploaded file, the results and status of processing, links to resulting documents, and the ability to reprocess the originally uploaded file completely.

Handling the duplicates that make it through.

The “same file” can make it into the system through a few different channels.
  1. A user deliberately disabled deduplication during an upload or selected the “reprocess file” option on an entire batch.
  2. The same document was attached to multiple emails.  The meaning of a document can vary wildly based on its context, so we consider an email and its attachment(s) as a single unit during the deduplication process.
  3. The “same file” by content hash will be allowed into the system if the associated metadata is different.  This could happen due to differences specified in load files, changes made to metadata in the system following uploads, etc.
In all of these situations, we display related files in the sidebar (of Discovery Cloud) to allow the human reviewer to determine whether the associated metadata does or does not warrant further deduplication.

Customization & Disabling Deduplication

Deduplication can be disabled on any individual upload/reprocessing request.  It can be disabled at the instance level via “Settings” => “General Settings” => “Deduplication”.  The settings section also provides the ability to use Email-Message-ID in the “exact match” determination.

When will it be available?

This functionality is available immediately in both Discovery and Trial Clouds.
As always, custom support options are available on request to address unique deduplication needs.  We’re excited about these new improvements and the positive impact they’ll have for our customers going forward.

Read Full Post »

Today we’re announcing the launch of the CloudPreservation Public API. It’s designed to make it even easier for you to get web-accessible data into CloudPreservation. What’s particularly great about this API is how easy it makes taking fine-grained control over your web preservations, either through a handy browser-based bookmarklet tool, or by having your own development team programmatically add webpages to your feeds. Let’s take a look at two ways to use this new feature.

Keeping a website feed up to date

Say you run a website that provides lots of content and has lots of updates — naturally, you have a CloudPreservation instance pointing at it to track all the changes. To keep tabs on everything that’s happening on your site, you’ve probably got the crawl frequency of that CP instance cranked up as fast as it goes too.

Unfortunately, that’s pretty inefficient. Not all of your pages have necessarily changed over the course of a week, so hunting through them all for the changes just takes time that doesn’t necessarily need to be spent. And, if you make an important change on Monday, it won’t be preserved until the next crawl — which might not be until the following Sunday.

The CloudPreservation Public API makes it really easy to get these “between crawl” changes into your feed as they happen. Simply install the bookmarklet for your website feed by dragging it to your bookmarks bar (or right clicking and choosing “Add Favorite” if you use Internet Explorer).

Then on every page you’d like to add or update, just click the bookmarklet and we’ll go fetch the newest version of it.

It really is that simple.

If you have a small development team available to you, you could even go one step further and integrate your content management system with our API. This would let you preserve copies of new or updated pages in CloudPreservation as they’re published or edited — nearly in real-time.
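For example, a publish hook in your CMS could notify CloudPreservation each time a page goes live. The sketch below is illustrative only: the endpoint URL, parameter names, and authentication scheme are placeholders, so refer to the API documentation for the real details.

```python
# Illustrative only: the endpoint, parameters, and auth shown here are
# placeholders, not the documented CloudPreservation API.
import requests

def preserve_page(page_url, feed_id, api_key):
    """Ask the preservation service to fetch and store the given URL."""
    response = requests.post(
        "https://example.invalid/api/pages",              # placeholder endpoint
        data={"feed_id": feed_id, "url": page_url},        # placeholder params
        headers={"Authorization": "Bearer " + api_key},    # placeholder auth
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# e.g. called from a CMS "post published" hook:
# preserve_page("https://www.example.com/blog/new-feature", feed_id=42, api_key="...")
```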

Storing only the pages you want to store

This new ability to fetch and preserve single pages actually lends itself to having a new type of feed as well, which we’re calling a Public API/Bookmarklet Feed. This feed only takes in the pages you tell it to specifically through the API or the bookmarklet.

Let’s say your company just launched an amazing new feature that’s being covered by all the major news outlets. The world is buzzing about your product and you want to preserve what they’re saying. Simply set up a Public API/Bookmarklet Feed — in this case we’ll call it “Launch Buzz” — and install its bookmarklet. Then, browse to any of the articles that you want to preserve and click the bookmarklet. CloudPreservation will see your request, and preserve a copy of that page in the “Launch Buzz” feed.

Public API/Bookmarklet feeds are fantastic for preserving this sort of research, as well as any other time you want to keep track of a collection of very specific web pages without crawling and storing their entire website. Collecting and preserving single webpages has never been easier.

More information

Bookmarklets are available today for all webpage and Public API/Bookmarklet feeds; look for them on your feeds listing page, along with instructions to get you started.

For the more programmatically inclined, the public API — and its associated documentation — is also available for use starting today. The documentation contains examples on how to send us webpages in various programming languages as well as instructions on how to move beyond those examples to build your own custom solutions.

We think the API is going to be a great tool in your preservation arsenal. As always, we love hearing your feedback. Feel free to get in touch with us if you have any comments or questions.

Read Full Post »

You’ve uploaded all of your docs into Discovery Cloud and, after some searching for agreed-upon keywords, have broken up the large universe of documents into several smaller sets (“subreviews”), but… these “smaller” sets still contain documents numbering in the high thousands!  You need to go further – you need to break these sets up again so each of your reviewers has a manageable set of documents that they can reasonably attack.

The “Split into subreviews” option on the landing page of your Review provides just that.

With a click, you’ll be on the road to breaking up that large Subreview into several smaller pieces.  You can break it up into as many pieces as you’d like by simply providing their names.  You may choose to name them things like “Environment-Bob” and “Environment-Sarah”, but you can get as original/specific as you like.

You control what happens to the original set of documents (“Environment” in this example).  Keep it around to maintain a rolled-up view of what’s going on in the component subreviews, or remove it to reduce clutter.  You also control what (if any) additional documents should be pulled into the set.  For example, you could pull emails related to your documents into the overall document set, to ensure that they’re included.

Related documents will be placed together (and sequentially) into the created subreviews to provide continuity for the reader.  For example, you won’t have to worry about an email landing in a different subreview than its attachments.  This may lead to slightly uneven document counts in subreviews, but only in extreme circumstances will the overall document counts be wildly different.
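A family-preserving split behaves roughly like the sketch below (a simplified illustration, not the production algorithm): whole families, such as an email plus its attachments, are dealt out to whichever subreview is currently smallest, so counts stay close to even without ever separating a family.

```python
# Simplified illustration of a family-preserving split (not the production code).
def split_into_subreviews(families, names):
    """families: a list of lists, each inner list one document family
    (e.g. an email and its attachments). names: the new subreview names."""
    subreviews = {name: [] for name in names}
    buckets = list(subreviews.values())
    for family in families:
        # Give the whole family to the smallest subreview so far, keeping
        # counts roughly even without splitting a family across reviewers.
        min(buckets, key=len).extend(family)
    return subreviews

families = [["email-1", "att-1a"], ["doc-2"], ["email-3", "att-3a", "att-3b"]]
print(split_into_subreviews(families, ["Environment-Bob", "Environment-Sarah"]))
# {'Environment-Bob': ['email-1', 'att-1a'],
#  'Environment-Sarah': ['doc-2', 'email-3', 'att-3a', 'att-3b']}
```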

After (or while) your documents are being split up, you can visit the “Settings” section to assign the subreviews to specific reviewers.  If a reviewer is in the Nextpoint “Reviewer” role-type, they will only have visibility into the subreviews to which they are assigned.  If the reviewer is of a different role-type (“Advanced” or “Standard”), assignment will provide some clarity as to who is working on what.

The ability to easily break up and assign large subreviews will provide clarity and visibility into the higher-level task, helping you to get the job done not only faster, but better.

Read Full Post »

Ever wonder how much power has been dedicated to your current import?  Along with some recent infrastructure upgrades, we’ve brought that information right up front where you can see it, giving you visibility to the elastic ramp-up of dedicated servers and processors working on your requests.  (Elastic ramp-up is a core capability of any legitimate cloud computing solution — Here’s Amazon’s case study of Nextpoint’s implementation.)

Available now via:

  • in Discovery/Review Cloud: “Imports/Exports”
  • in Trial (Prep) Cloud: “More” -> “My Downloads” -> “Imports”

Each processing request begins by breaking the work up into smaller “Jobs”.  There’s a lot of logic put into just how that breakup occurs, but the gist of it is: bigger requests = more jobs = more dedicated processors to get your work done.
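As a rough illustration (the real breakup heuristics are considerably more involved), splitting a request into jobs might look like this:

```python
# Rough illustration of breaking a processing request into jobs; the real
# logic is considerably more involved.
def make_jobs(file_sizes, target_job_bytes=500 * 1024 * 1024):
    """Group files into jobs of roughly target_job_bytes each. Bigger
    requests naturally yield more jobs, and each job can be handed to
    its own processor."""
    jobs, current, current_bytes = [], [], 0
    for size in file_sizes:
        if current and current_bytes + size > target_job_bytes:
            jobs.append(current)
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
    if current:
        jobs.append(current)
    return jobs

# A 10 GB request split into ~500 MB jobs yields ~20 jobs running in parallel.
```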

We’re happy to be pushing this information to the front and hope that you like it too!

Read Full Post »

Our electronic Exhibit Stamping has always been geared towards improving the tedious act of stamping hundreds or even thousands of docs.  In fact, we’ve even told you about the code we use to do it, but what about the smaller day-to-day jobs?

We’re introducing some changes to our Exhibit Stamping interface that will have you covered for big and small batches of docs alike.

Search for the doc(s) that you want to stamp.

Select “stamp” for the appropriate designation of the first (only?) doc that you’d like to hit.  (You’ll need to “Add” the designation first, if you haven’t already.)

When you arrive in the stamping interface, the initial state will reflect the document within the scope of the current stamping request.  For example, if you’re stamping as “Defense” and the doc already has a “Defense” stamp, you will be greeted by the document with its stamp in the current position; adjust it if you need to.  If the doc has not previously been stamped (or has, but not as “Defense”), the stamper will not be shown by default.

Place the stamp as you see fit, then close the stamping interface or move to the next document in the set.  If you uncheck the stamping checkbox, no stamp will be applied (and if the doc was previously stamped, the existing stamp will be removed).
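For the curious, the stamping itself boils down to overlaying a text label on the document. The sketch below uses the open-source pypdf and reportlab libraries purely as an illustration; it is not Nextpoint’s implementation.

```python
# Illustration only (not Nextpoint's implementation): overlay an exhibit
# label on the first page of a PDF using pypdf and reportlab.
from io import BytesIO
from pypdf import PdfReader, PdfWriter
from reportlab.pdfgen import canvas

def stamp_exhibit(src_path, dst_path, label, x=36, y=36):
    # Draw the label onto a one-page overlay PDF held in memory.
    buf = BytesIO()
    c = canvas.Canvas(buf)
    c.setFont("Helvetica-Bold", 14)
    c.drawString(x, y, label)  # position in points from the lower-left corner
    c.save()
    buf.seek(0)
    overlay = PdfReader(buf).pages[0]

    reader = PdfReader(src_path)
    writer = PdfWriter()
    for i, page in enumerate(reader.pages):
        if i == 0:
            page.merge_page(overlay)  # stamp only the first page
        writer.add_page(page)
    with open(dst_path, "wb") as out:
        writer.write(out)

stamp_exhibit("deposition.pdf", "deposition-stamped.pdf", "Defense Exhibit 12")
```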

It’s a much cleaner and faster interface and we’re really happy with how it’s turned out.  Hope you like it!

Read Full Post »
