
Archive for the ‘Tips & Tricks’ Category

At Nextpoint, we understand that it is crucial for eDiscovery and litigation tools to be as fast and cost-effective as possible. To that end, we periodically introduce major enhancements, like our review metrics designed to help you track and monitor the progress of your review. But we are also continually tweaking our interface in response to customer requests to make sure it works the way you do.

Today, we’re proud to introduce a new and improved Grid View, our streamlined interface for browsing your documents. We have already introduced a number of features that make Grid View more customizable. Now we have even more resizing options, including a new feature that saves custom views for individual reviewers.

In the past, a reviewer might resize their Grid View to quickly scan the information most important to them. Now, those changes are automatically saved, so each reviewer can preserve their custom column widths and other choices as they move through their review. It’s just one small way we’re trying to help reviewers quickly scan and process the information they need for a case.

Gridview

For example, in the Grid View layout above, users can change, resize, and manipulate any of the columns, a process which is now smoother than ever. In addition, all of our Nextpoint applications will remember those choices and display the same column widths and layout on every page of search results that the reviewer looks at. As always, comments and feedback are welcomed and encouraged. Feel free to email us at thelab atsymbol nextpoint dot com.

Read Full Post »

Viewing deposition video is an essential part of the Trial Cloud’s Depositions tool.  The process becomes a bit more convenient with the addition of continuous play within a label set.

playlist

In addition to play controls for each individual designation, the controls preceding the list of designations will play the entire set hands-free.

This change is available immediately in all Trial Cloud instances.

Read Full Post »

Import status (for “Batch” document uploads) in DiscoveryCloud and TrialCloud has been updated to streamline reporting and enhance issue detection and handling.

The “batch list” page has a simplified look, allowing twice as many batches to be displayed at a time, along with quick visual cues that make statuses obvious at a glance. The status bar provides a visual diagnostic of processing results for each batch. Click on a section of the bar to view the corresponding portion of the processing logs.


Marking a batch as “Resolved” will update its status and gray out the status bar to make it a little less eye-catching.


Remembering that “Batch 9” is the zip of files you found on Terry’s PC is a bit of a pain.  Providing a name for the batch gives you a handy moniker to be used throughout the interface.

Batch Status Reporting

Once your batch has completed, you can download a full report of actions, or just the specific actions you are interested in (e.g., only the documents/issues that recommend a follow-up action).


The link for downloading only “Normal” actions is pictured above; to download only the “Warnings,” for example, a similar link can be found on the “Warnings” tab.

The download is a CSV listing the actions taken and (where available) links to the related documents in the interface, providing you with a convenient starting point for resolving any issues encountered.
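
If you want to work with the report outside the interface, a minimal Python sketch like the one below can filter it for rows that need attention. The column names “Action” and “Document Link” are hypothetical placeholders, not the actual export headers; check the header row of your CSV and adjust accordingly.

import csv

# Minimal sketch: filter a downloaded batch report for rows that are not "Normal".
# Column names ("Action", "Document Link") are hypothetical placeholders; check
# the header row of your actual export and adjust.
with open("batch_report.csv", newline="") as report:
    for row in csv.DictReader(report):
        if row.get("Action", "").strip().lower() != "normal":
            print(row.get("Action"), row.get("Document Link", ""))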


We’re excited about what these changes immediately bring to the table for batch status reporting and error resolution, as well as the future enhancements these underlying changes will enable.

Read Full Post »

The account dashboard is your tool for keeping up to date on how much data you’re storing in your Trial Cloud, Discovery Cloud and Preservation Cloud repositories.  Each product dashboard provides an overview of the data used by each of your repositories as well as a product-wide gigabyte sum.

The numbers shown for each repository are the averages of all the records for the time period you are viewing. We run our storage calculations twice daily – once in the morning and once in the evening. You can view a repository’s daily usage by clicking on the repository name. The daily usage records shown are the maximum of the two storage numbers for that day in gigabytes.
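
For example (using made-up numbers), if a repository’s morning and evening measurements on a given day are 4.2 GB and 4.6 GB, the daily usage shown for that day is 4.6 GB, the larger of the two, and that day’s records contribute to the average displayed for the period you are viewing.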

The Cloud Preservation dashboard includes feed counts as well as storage numbers and presents these in the same fashion.

A note on document deletion: We wait a full day after a document has been deleted to fully purge it from the system. This gives us the ability to restore the document quickly if it was incorrectly deleted. This may cause some lag in the reduction of gigabytes used per day, but have no fear, the reduction will be recorded.

Managing storage can be a daunting task and we strive to be transparent about the amount of data you are storing in any of our products.

Read Full Post »

A very powerful feature of Cloud Preservation is its ability to collect external links. External links are links to web pages or documents that are outside of the website or social media feed being collected.

In terms of website feeds, Cloud Preservation determines if a link is external by comparing the address of the link to the addresses defined in your feed. In the context of Cloud Preservation social media feeds (such as Twitter or Facebook), an external link is a link that was found in a post from the social media feed.
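
As a rough illustration of the website case, here is a simplified Python sketch of that comparison. This is not Cloud Preservation’s actual implementation, and the feed addresses and URLs are made up.

from urllib.parse import urlparse

# Simplified sketch of external-link detection for a website feed -- not
# Cloud Preservation's actual implementation. The feed addresses are made up.
FEED_ADDRESSES = {"www.mywebsite.com", "blog.mywebsite.com"}

def is_external(link_url):
    # A link is external if its host is not one of the addresses defined in the feed.
    return urlparse(link_url).netloc not in FEED_ADDRESSES

print(is_external("http://www.mywebsite.com/terms.html"))  # False: internal
print(is_external("http://news.example.com/article"))      # True: external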

Cloud Preservation provides four configurable options for how it will manage external links. These options allow you to tailor your feeds to meet your collection needs and also provide a level of control over your feed’s storage use.

Option 1: Never collect external links

This option allows you to ignore offsite links entirely. When the Cloud Preservation crawler encounters a link that it determines to be external, it will record that link, but will not collect the web page at that link’s address. Since this option leaves these external pages out of your repository completely, these external links have no impact on your feed’s storage use.

When to use this option: There is no requirement to collect external pages, or there isn’t enough storage capacity for external pages under the selected Cloud Preservation plan.

Option 2: Never collect modified versions of external links

With this option selected, Cloud Preservation will look to see if it has ever collected this external link before, by comparing the address to all of the addresses of pages it has collected in the past. If it finds another page in the repository that bears this same address, then Cloud Preservation will simply link the existing page to the currently running crawl. Of all the options to collect external links, this has the lowest impact on storage for the repository.

When to use this option: There is a requirement to collect external pages, but the latest version isn’t important or of consequence. Oftentimes, for social media feeds like Twitter, modifications to the external page aren’t relevant. For example, the external link could be an article or blog post with constantly changing advertisements and user comments that aren’t important or relevant for your collection.

Option 3: Collect modified versions of external links for new or modified pages

If Cloud Preservation crawls an internal page that has not changed since the last collection, then it will not attempt to fetch the latest version of any external links. However, if the page has changed since the last collection, or is a page that has not been collected previously, then Cloud Preservation will check for new versions of all external links on that page. This option is slightly less efficient in terms of repository storage, but does offer savings over the final option.

Note: This is the default setting for new Cloud Preservation feeds, as we’ve found it to be the best choice for enhancing your collection with external links while keeping storage use in check.

When to use this option: There is a requirement to collect a “point in time” snapshot of both the internal pages and the external pages.

Option 4: Always collect the latest external link

Finally, this option will always attempt to fetch the latest version of the external link. Whether the link is found on a new, modified, or unmodified internal page, Cloud Preservation will crawl the external link to see if there is a new version. This option will have the largest impact on storage, as external pages frequently change due to rotating advertisements or images and changing content.

When to use this option: Useful when the latest version of offsite pages must always be collected, and there is surplus storage capacity in the chosen Cloud Preservation plan. This option is also necessary for some advanced crawling techniques, such as using a single internal web page whose purpose is to provide an index of several external links.
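
To summarize, here is a rough decision sketch of the four options in Python. The function and parameter names are hypothetical illustrations, not Cloud Preservation’s API.

# Rough summary of the four external-link options. Names are hypothetical
# illustrations, not Cloud Preservation's API.
def should_fetch_external_link(option, previously_collected, internal_page_new_or_modified):
    if option == 1:   # Option 1: never collect external links
        return False
    if option == 2:   # Option 2: never collect modified versions of external links
        return not previously_collected
    if option == 3:   # Option 3: collect modified versions for new or modified internal pages
        return internal_page_new_or_modified
    return True       # Option 4: always collect the latest external link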

The crawling process of Cloud Preservation can get complicated, just like the web, and we hope this sheds a bit of light on the subject of external links.

Read Full Post »

CloudPreservation.com includes a very powerful search capability, so that you can gather quite a bit of information about your archived websites and social media.

In this post, we’ll walk you through some tips and tricks for using dates and crawl times to isolate documents that appeared in a specific timeframe.

What you should know about document dates in CloudPreservation.com

CloudPreservation requests information from Twitter, LinkedIn, and Facebook via their respective APIs. Because of the structured and predictable nature of these APIs, CloudPreservation.com is able to store these dates in its database, as well as its search index.

Since web pages don’t provide a posted date in a predictable manner, CloudPreservation cannot determine what date pages were posted on. Therefore, CloudPreservation does not have any data in its database or index for the document date.

However, CloudPreservation does crawl web sites at specified intervals, so you can use these intervals to determine when a page was added, changed, or deleted. The accuracy of this method depends on how frequently your crawl interval is configured.
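
For example, if a feed is crawled weekly and a page appears in this week’s crawl but not in last week’s, you know it was added at some point during that week; a more frequent crawl interval narrows that window.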

What this means is that with CloudPreservation you can search by document date for Twitter, LinkedIn or Facebook posts, and you can search web pages by using crawl frequency ranges.

With that, let’s look at some common search scenarios and how you’d execute them using CloudPreservation’s powerful search functionality.

Show me what my social media feed looked like on a certain date

Oftentimes you’d like to see what your social media feed looked like on a specific date. In this case, what you’d like to tell CloudPreservation.com is: “Show me all posts on or before this date, exclude offsite links, and order by date in reverse-chronological order.”

Using a combination of the date range search condition and a document type condition, CloudPreservation can deliver you this information. So, if you’d like to see what your social media feed looked like on June 22nd, 2011, you could construct your search like so:

document_date:[1970-01-01 2011-06-22] AND NOT document_type:"Web Page"

Once you have your results, you can order by document date in descending order.

Show me all new pages added in the last crawl

Sometimes you just want to see everything that’s new in your feed since the last time it was crawled. To do that, select a crawl from the crawl list below the search text box. Once a crawl is selected, a checkbox will show that allows you to restrict the search to pages that were created in the selected crawl.

Your results then should reflect any new pages, or pages that have changed since the crawl previous to the one selected. You can optionally enter a search term to narrow the results here as well.

See this blog post on the feature for further information.

Show me pages that are in one crawl but not the other

Sometimes you’d like to see the complement of a crawl, to determine what’s been removed between crawls. In this case, we build the search syntax like so:

crawl:"My Web Site - 2011-04-27 - 2011-05-28" AND NOT crawl:"My Web Site - 2011-05-28 - 2011-06-28"

To find out what exactly to put inside the quotes as the crawl name, you can copy the name of the crawls you are interested in from the crawl list below the search text box.

Note: Minimizing duplicates in your web site crawls enhances this report greatly. You can work with Nextpoint to build a customized SmartCrawl, which can filter out irrelevant changes between documents from crawl to crawl.

Show me the history of a page

One other common task is looking at the history of a page within CloudPreservation. By looking at the history, you can see what changed, and, depending on the feed’s crawl frequency setting, get a timeframe for when the page was added, updated or deleted.

To get the history of a page, you need to perform a search based on the URL of the page.

web_addresses:"http://www.mywebsite.com/terms.html"

This will return all instances of this page that exist in CloudPreservation.com. You can view each of the results to see how the page has changed over time and get an idea of when it arrived on the site or when it was removed.

Note: Again, minimizing duplicates in your web site crawls enhances this report greatly.

Hopefully you’ll find these tips and tricks helpful when searching your feeds within CloudPreservation.com.

Enjoy!

Read Full Post »

Previously in TrialCloud and DiscoveryCloud, there were two distinct mechanisms to import documents: first, as a single file or document, and second, as a batch containing multiple files and an optional load file. With our last release and the inclusion of S3 Folders, these methods have been combined into a common interface called “Import Files” while maintaining and building upon the original functionality. Here’s how it works.

After selecting the file(s) to import on the Import Files page, you have the option to process them as container files or as native files.

Importing Container Files

Choose “Container Files” when uploading a single file or folder that is being used only as a means of organizing and uploading the files it contains.

Container Import Results

Information from load file will be applied

Only the contents of the file or folder will be processed, indexed, and included in the case or review, not the container file itself. If a container file or folder is found to contain a Nextpoint load file, that load file will be applied as part of the import.

Importing Native Files

Native File Import

Choose “Native Files” if you are uploading multiple files or folders, or if uploading a single zip file or single folder that is itself evidence. All files, including load files, found as part of processing the selected items will be processed as native files, and the information in the load files will not be applied. So be careful to use the Container Files option if you are looking to utilize a load file.

Native Results

Native files will be processed as if they are Evidence

These improvements, along with the addition of S3 Folders, have streamlined the Import process and provided more flexibility in importing files to your Nextpoint TrialCloud or DiscoveryCloud repositories.  As always, we welcome your comments and feedback.

Read Full Post »
