
Archive for the ‘Release Notes’ Category

Several small/medium enhancements have been made available over the last couple of weeks.  This is not a complete list, but we’d like to highlight a few:

Deposition Summary Report

Available in TrialCloud, this feature provides a high level overview of what is currently loaded.  It’s quite handy when conducting a “what are we missing” audit following extensive uploading.

“Responsive w/o Issues” Filter

Available in DiscoveryCloud, this handy feature helps you isolate documents that have been flagged as responsive but are potentially missing a reason code.
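Conceptually, the filter isolates documents coded as responsive whose list of reason codes is empty. A minimal sketch of that logic (the field names are hypothetical, not Nextpoint's actual data model):

```python
# Sketch of the "Responsive w/o Issues" filter logic.
# "responsive" and "issues" are hypothetical field names for illustration.

def responsive_without_issues(documents):
    """Return documents flagged responsive that carry no reason codes."""
    return [d for d in documents if d.get("responsive") and not d.get("issues")]

docs = [
    {"id": 1, "responsive": True,  "issues": ["hot"]},
    {"id": 2, "responsive": True,  "issues": []},      # flagged, but no reason code
    {"id": 3, "responsive": False, "issues": []},
]
flagged = responsive_without_issues(docs)
```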

Copy Export to S3 Folder

Downloading large files via the browser can be inconvenient.  Moving them to a location where you can utilize a more robust upload/download tool can simplify the process.


Rotate Ranges of Document Images

Rotating pages individually can be a hassle when an entire document came in sideways.  Move several at once by providing a range.
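In spirit, a range rotation is just "parse the range, apply the same rotation to every page in it." A sketch under assumed conventions (inclusive ranges, clockwise degrees; not the actual implementation):

```python
# Sketch: apply one rotation to a page range like "4-6".
# Inclusive ranges and clockwise degrees are assumptions for illustration.

def parse_range(spec):
    """Parse "4-6" (inclusive) into a list of page numbers."""
    start, sep, end = spec.partition("-")
    if sep:
        return list(range(int(start), int(end) + 1))
    return [int(start)]

def rotate_pages(rotations, spec, degrees=90):
    """Record a clockwise rotation for every page in the range."""
    for page in parse_range(spec):
        rotations[page] = (rotations.get(page, 0) + degrees) % 360
    return rotations

rotations = rotate_pages({}, "4-6", degrees=90)
```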


Each of these features is already available and active in your Nextpoint application instance.


One feature of our recent Review Metrics release is the ability to keep closer tabs on your review at the Reviewer level.  One (optional) tool in your set is the ability to track time for the purposes of calculating documents per hour reviewed.

A Reviewer may clock themselves in/out with the click of a button on their landing page.


 

If time was logged incorrectly, spent outside the system, or just plain forgotten, individual entries may be edited or added in full via the calendar interface (accessed via the “view” link on the Reviewer’s landing page).


 

Advanced users may manage their own time entries or do the same for other users:


 

The focus of Time Keeping is the documents-per-hour metric; it does not aim to be a fully featured time-management system.  In that vein, the system is flexible and does not provide the locking, security, and audit trails that might appear in a true time-tracking tool.  However, reports detailing the work done in a given day or month, by any given Reviewer, are available to advanced users, providing direct timestamps at which documents were reviewed.
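The documents-per-hour figure itself is straightforward arithmetic over the clock-in/out intervals. A sketch (the entry format is an assumption, not the actual implementation):

```python
from datetime import datetime

# Sketch of the documents-per-hour calculation from clock-in/out entries.
# The (clock_in, clock_out) pair format is assumed for illustration.

def docs_per_hour(entries, docs_reviewed):
    """entries: list of (clock_in, clock_out) datetime pairs."""
    hours = sum((out - inn).total_seconds() for inn, out in entries) / 3600
    return docs_reviewed / hours if hours else 0.0

entries = [
    (datetime(2013, 3, 1, 9, 0), datetime(2013, 3, 1, 12, 0)),   # 3 hours
    (datetime(2013, 3, 1, 13, 0), datetime(2013, 3, 1, 14, 0)),  # 1 hour
]
rate = docs_per_hour(entries, docs_reviewed=220)
```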


 

We hope you’ll find this change and the wider release that it is part of helpful in gaining visibility into the status of your review.


We’re excited to bring a significant refresh to the Discovery Cloud experience.  The updates add clarity to your review at the Reviewer and Subreview levels and provide visibility into where progress is being made or where it may be lacking.  The ability to more clearly differentiate between documents that are not yet reviewed for privilege and those that have been positively identified as “not privileged” facilitates privilege review in a way that was previously inconvenient.

 

Subreview Metrics

Available to Advanced-level users, a tab under “Admin” brings you status and counts at the Subreview level.


Subreview level statistics illustrate progress and may provide insight on what area of your review is producing the highest quantity of relevant/privileged/etc documents.  Many graphs may be clicked to gain an extra level of detail.

 

Reviewer Metrics

Available to Advanced-level users, a tab under “Admin” brings you status and counts at the Reviewer level.


Statistics for each Reviewer provide visibility on what sort of work is being accomplished.  If you elect to utilize the (new) Timekeeper functionality, a time metric is also available to give an indicator of review speed.  As with Subreview Metrics, many graphs may be clicked to gain an extra level of detail.

 

Reporting

Available via Settings -> Metrics Reports, users may opt in to status emails.  A report will be generated and transmitted weekly, providing the recipient with the overall +/- subreview status of the review.  These emails do not need to be tied to a Nextpoint account, freeing you up to send them to addresses of those not necessarily involved in the day-to-day review.


 

Independent Privilege and Relevancy Review

Previously known as “Review Status”, “Relevancy Status” concentrates on the relevant/not-relevant portion of a review.  On the privilege side, this enables differentiation between a document that is “not reviewed for privilege” and one that has been reviewed for privilege and certified to truly be “not privileged”.

Existing documents with a “Not Privileged” status have been marked as “Not Reviewed” for privilege.  If you would prefer that those documents instead be “Not Privileged”, a simple bulk edit is all that is necessary to make that modification en masse.

 

So, when do I get it?

The update will be available to some users beginning Tuesday 3/19, with the remainder receiving the updates Thursday, 3/21.  As with all updates, no action is necessary on your end.


Import status (“Batch” documents upload) in DiscoveryCloud and TrialCloud has been updated to streamline reporting and enhance issue detection and handling.

The “batch list” page has a simplified look, allowing 2x the previous quantity to be conveniently displayed at a time, along with quick visual cues to make statuses obvious at a glance.  The status-bar provides a visual diagnostic of processing results for each batch.  Click on a section of the bar to view the corresponding portion of the processing logs.

 

Marking a batch as “Resolved” will update its status and gray out the status bar to make it a little less eye-catching.

 

Remembering that “Batch 9” is the zip of files you found on Terry’s PC is a bit of a pain.  Providing a name for the batch gives you a handy moniker to be used throughout the interface.

Batch Status Reporting

Available when your batch has completed:  download a full report of all actions, or only the specific actions you are interested in (i.e. only the documents/issues for which follow-up action is recommended).

 

The link for “Normal” actions only is pictured above.  To download only the “Warnings”, for example, a similar link may be found on the “Warnings” tab.

The download is a CSV listing the actions taken and (where available) links to the related documents in the interface, providing you with a convenient starting point for resolving any issues encountered.
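Because the report is plain CSV, it is easy to post-process, e.g. pulling out only the rows that link to a document. A small sketch (the column names here are hypothetical, not the report's actual headers):

```python
import csv
import io

# Sketch: filter a batch-report CSV down to rows that link to a document.
# The column names ("action", "document_url") are hypothetical.

report = io.StringIO(
    "action,document_url\n"
    "imported,https://example.com/documents/101\n"
    "skipped,\n"
    "imported,https://example.com/documents/102\n"
)
linked = [row for row in csv.DictReader(report) if row["document_url"]]
```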

 

We’re excited about what these changes immediately bring to the table for Batch-status reporting and error resolution, as well as the future enhancements these underlying changes will enable in the future.


Along with numerous back-end architectural enhancements, last night we introduced a set of keyboard shortcuts to improve Reviewer efficiency when moving through a large set of documents.  A cheat-sheet is available via a link in the upper-right corner of the screen while reviewing documents.

 

 

Shortcuts have been made available for the most common coding operations, as well as page & document navigation options.

 

Coding

  • Alt + Ctrl + R: Responsive
  • Alt + Ctrl + N: Non-Responsive
  • Alt + Ctrl + F: Requires Follow-Up
  • Alt + Ctrl + P: Privileged
  • Alt + Ctrl + C: Clear Coding

Persistence

  • Alt + Ctrl + U: Update
  • Alt + Ctrl + Enter: Update & Next

Navigation

  • Alt + Ctrl + Up Arrow: Previous Page
  • Alt + Ctrl + Down Arrow: Next Page
  • Alt + Ctrl + Left Arrow: Previous Document
  • Alt + Ctrl + Right Arrow: Next Document

General

  • Alt + Ctrl + H: Show Help Menu
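In spirit, the handler behind shortcuts like these is a dispatch table keyed on the modifier set plus the key. A toy sketch in Python terms (the action names are stand-ins, and the real handler lives in browser JavaScript):

```python
# Toy dispatch table mirroring the shortcut scheme above.
# Action names are stand-ins; the real implementation is browser-side.

SHORTCUTS = {
    ("alt", "ctrl", "r"): "responsive",
    ("alt", "ctrl", "n"): "non-responsive",
    ("alt", "ctrl", "p"): "privileged",
    ("alt", "ctrl", "c"): "clear",
}

def handle_key(modifiers, key):
    """Return the coding action for a key combo, or None if unmapped."""
    return SHORTCUTS.get((*sorted(modifiers), key.lower()))

action = handle_key({"alt", "ctrl"}, "R")
```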

 

“Alt + Ctrl”?  Why the game of keyboard-Twister?  The catch here is that your browser is also “listening” for the keys you’re pressing and reacting to certain special combinations.  Some browsers like to react more to “Control”, others “Command”, and others “Alt”.  This combination of keys gets us into our own space while using keys that are (on many/most keyboards) located near enough to each other to make the combination practical… with a little practice.

These new options allow you to quickly update a document’s status and move on to the next in the stack.


Good news everyone! We’re happy to announce the addition of Tumblr feed archival functionality at Cloudpreservation.com. Cloud Preservation users now have the ability to automatically archive Tumblr blogs.

Tumblr lets you effortlessly share anything. Post text, photos, quotes, links, music, and videos, from your browser, phone, desktop, email, or wherever you happen to be. You can customize everything, from colors, to your theme’s HTML.

Cloud Preservation archives all of Tumblr’s different post types while maintaining each blog’s customization.

Sample Tumblr Post from life.tumblr.com


Not only are Tumblr posts stored as they appear to website viewers, but Cloud Preservation also stores multimedia file resources used within posts. Photos from photo posts, videos from video posts and audio from audio posts are all automatically archived. Just as video files from sources like YouTube and Vimeo are viewable within the Cloud Preservation viewer, audio files shared on Tumblr can also be played without leaving Cloudpreservation.com

Audio Player

Cloud Preservation offers two different Tumblr feed archival options: Public and Authenticated. When using the Authenticated option, users archive all posts from every blog they have access to, as well as a list of followers from each blog. Authenticated feeds also archive basic user profile information. With the public feed option, users can archive all the posts from any public Tumblr blog.

As of November 14th, 2011, Tumblr hosted 33,318,876 blogs, and Tumblr users were posting at a rate of 38,000 posts per minute. With so much Tumblr data being shared, we’re glad to offer Cloud Preservation users the ability to fulfill their legal and compliance obligations.


Today Cloudpreservation.com is happy to announce archival functionality for the social photography site Flickr.com.  Flickr account holders are now able to automatically backup their Flickr photos and videos with Cloudpreservation.

The U.S. Food and Drug Administration's Flickr Profile


Cloudpreservation offers two different options for archiving accounts: authenticated and public feeds.  Authenticated feeds archive all of a user’s photos, videos, profile information, contacts, comments, favorites and photosets.  When archiving a public feed, Cloudpreservation has access to only the profile information, contacts, favorites, photos and videos that are publicly available.  All of the public user’s photosets will be archived, but private photos within the photosets will not.  Public Flickr feeds do not include a user’s comments.


The U.S. Food and Drug Administration Archived Flickr Photoset: Recalled Products

When archiving Flickr photos, Cloudpreservation stores the highest resolution version of the file available as well as the metadata associated with it.  Exif data, tags, timestamp and licensing information are all archived and are easily searchable.

Example Flickr Photo with Data


Cloudpreservation also stores social data from the Flickr website like comments and favorites.  This allows the documentation of social interaction with the added context of an image or video.

Example Archived Flickr Comments


Currently, over 5 billion photos are stored at Flickr. It’s used by many companies and government agencies to store and promote their digital media.  We’re glad to be able to provide Flickr users with a way to archive their accounts and fulfill legal and compliance obligations.


S3 Folders is an alternative to browser-based data import, allowing you to utilize a variety of client-based uploading tools to transmit data to Discovery Cloud and Trial Cloud.  File size limitations are effectively removed, allowing you to upload large files (e.g. PST mailboxes) without the hassles associated with splitting them up.

Following upload to Amazon S3, files may be selected for import via the batch creation screen’s file-picker:

Load file formats traditionally supported in browser uploads continue to be supported via both browser upload and Case Folder selection.  Additionally, Case Folders supports the selection of loose files or directories containing loose files, making the upload process that much simpler.  Uploaded directory structures containing load files enjoy the extra benefit of easy correction and drop-in replacement of load files when issues are realized and remedied.

It’s an exciting development that we hope you’ll get a lot of mileage out of.


The volume of data involved in the e-discovery process is imposing.  Removing exact duplicates prior to review is the most powerful and sensible way to eliminate obviously redundant data.  This reduces not only the overall data volume, but also ensures data only needs to be reviewed once.

In the latest enhancement to the Nextpoint product lineup, including Cloud Preservation, Discovery Cloud, and Trial Cloud, automatic “deduplication” is built in, allowing users to eliminate duplicative data simply and easily via a user-friendly interface.  This is a huge efficiency improvement, reducing the time and cost of e-discovery, and we are excited to bring this innovation to our customers.

The two-phase deduplication process.

1. Preventing duplicate uploads.

As an initial step, uploaded files (zip, pst, loose file, etc.) are compared to all previous uploads.  If the upload is an exact duplicate of a previous upload, the application will request confirmation that you would like the duplicative files to be loaded.  Frequently, this step alone can prevent large numbers of duplicate documents.
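Phase 1 amounts to comparing a fingerprint of the new upload against fingerprints of everything uploaded before. A minimal sketch using a SHA-256 content hash (the specific hash function is an assumption; the post doesn't name one):

```python
import hashlib

# Sketch of phase 1: flag an upload whose bytes exactly match a prior upload.
# SHA-256 is an assumption; the actual hashing scheme isn't specified.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

previous_uploads = {fingerprint(b"batch-1 contents")}
new_upload = b"batch-1 contents"

# A match here would trigger the confirmation prompt described above.
is_duplicate = fingerprint(new_upload) in previous_uploads
```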

2. Preventing duplicate documents/files, contained in different archives.

Oftentimes the same file (an email, document, etc.) has been collected from multiple sources.  When this occurs, the upload will slip past Phase 1 because the container (zip/pst/etc) was physically different from anything previously uploaded.  This is by far the most common cause of duplicate documents.

Individual checks occur on files contained inside the top-level container to search for an exact match*.  When an exact match is caught, introduction of the duplicative data is prevented; instead, a link is made to the pre-existing copy of the document.  The pre-existing document will now indicate that it was loaded in both Batch 1 and Batch 2.  Additionally, fields such as location URI and custodian will be merged.

* Metadata from a load file and/or changes made via the web application after the previous load completed (designating, reviewing, changing shortcuts, etc) will be taken into consideration when making the “exact match” determination.  By default, a file hash is employed to identify candidates for “exact matches” – optionally, this can be expanded to include documents that may have a different file hash but share an Email-Message-ID.
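Phase 2 works the same way one level down: each file inside the container gets its own match key, optionally derived from the Email-Message-ID rather than the raw bytes. A sketch of that key selection (the key structure is an assumption for illustration):

```python
import hashlib

# Sketch of phase 2: per-file match keys inside a container.
# The tuple key structure is an assumption for illustration.

def match_key(content: bytes, message_id=None, use_message_id=False):
    """Key a file by Email-Message-ID when enabled, else by content hash."""
    if use_message_id and message_id:
        return ("message-id", message_id)
    return ("sha256", hashlib.sha256(content).hexdigest())

existing = {match_key(b"hello world")}
byte_dup = match_key(b"hello world") in existing

# Same message collected twice with cosmetic byte differences: the
# Message-ID option catches it even though the hashes differ.
key_a = match_key(b"slightly different bytes",
                  message_id="<abc@example.com>", use_message_id=True)
key_b = match_key(b"hello world",
                  message_id="<abc@example.com>", use_message_id=True)
dup_by_id = key_a == key_b
```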

Understanding what has been deduplicated.

The batch details screen (choose “Import/Export” from the menu bar in Discovery Cloud or “More” -> “Imports” in Trial Cloud) has been enhanced to provide information on how much and what has been deduplicated.

The main section provides verbose lists of Actions, Errors, and Skipped Rows that occurred during processing.  It also provides abbreviated lists of new and duplicate documents encountered during processing, with links to quickly view the full set(s) via our normal search/filter interface.

The Batch Summary section (located in the sidebar) has been enhanced to provide information about the uploaded file, the results and status of processing, links to resulting documents, and the ability to reprocess the originally uploaded file completely.

Handling the duplicates that make it through.

The “same file” can make it into the system through a few different channels.
  1. A user deliberately disabled deduplication during an upload or selected the “reprocess file” option on an entire batch.
  2. The same document was attached to multiple emails.  The meaning of a document can vary wildly based on its context, thus we consider an email and its attachment(s) as a single unit during the deduplication process.
  3. The “same file” by content hash will be allowed into the system if the associated meta is different.  This could happen due to differences specified in load files, changes made to meta in the system following uploads, etc.
In all of these situations, we display related files in the sidebar (of Discovery Cloud) to allow the human reviewer to determine whether associated meta does or does not warrant further deduplication.

Customization & Disabling Deduplication

Deduplication can be disabled on any individual upload/reprocessing request.  It can be disabled at the instance level via “Settings” -> “General Settings” -> “Deduplication”.  The settings section also provides the ability to use Email-Message-ID in the “exact match” determination.

When will it be available?

This functionality is available immediately in both Discovery and Trial Clouds.
As always, custom support options are available on request to address unique deduplication needs.  We’re excited for these new improvements and the positive impact they’ll have for our customers going forward.


Ever wonder how much power has been dedicated to your current import?  Along with some recent infrastructure upgrades, we’ve brought that information right up front where you can see it, giving you visibility into the elastic ramp-up of dedicated servers and processors working on your requests.  (Elastic ramp-up is a core capability of any legitimate cloud computing solution; here’s Amazon’s case study of Nextpoint’s implementation.)

Available now via:

  • in Discovery/Review Cloud: “Imports/Exports”
  • in Trial (Prep) Cloud: “More” -> “My Downloads” -> “Imports”

Each processing request begins by breaking the work up into smaller “Jobs”.  There’s a lot of logic put into just how that breakup occurs, but the gist of it is: bigger requests = more jobs = more dedicated processors to get your work done.
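The "bigger requests = more jobs" idea is, at its simplest, chunking: split the request into fixed-size units of work and hand each to a processor. A sketch (the chunk size is arbitrary, chosen only for illustration):

```python
# Sketch: break a processing request into fixed-size "Jobs".
# The 100-document chunk size is arbitrary, for illustration only.

def split_into_jobs(num_documents, chunk_size=100):
    """Return (start, end) document ranges, one per job."""
    return [(i, min(i + chunk_size, num_documents))
            for i in range(0, num_documents, chunk_size)]

jobs = split_into_jobs(250)   # bigger request -> more jobs
```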

We’re happy to be pushing this information to the front and hope that you like it too!

