Archive for the ‘Deep Thoughts’ Category

I’ve been thinking a lot about the term e-discovery and how much I’ve grown to dislike and distrust it over the years.  At best, it’s an unnecessary term.  At worst, it’s a ploy to compound the complexities of an already expensive and time-consuming process.  It’s led to an onslaught of other made-up terms that arise to confuse us on a daily basis, such as my personal favorites ESI (Electronically Stored Information) and ECA (Early Case Assessment).  Do we really need to memorize acronyms for those?  It has completely distracted us from the core discovery process, which I would say is as simple as identifying, reviewing, and producing relevant information.  Let’s explore.

First off, is it still necessary for us to qualify everything in our lives as electronic?  Did you listen to e-music today during your morning commute?  Clearly a very small percentage of the music we listen to these days is analog.  And clearly there is a very small percentage of discoverable information in paper form.  It’s just discovery but the format of the information has changed.  Music is just music and so it goes with discovery.

Why does it matter?  You might be thinking that adding the “E” doesn’t hurt anything, but in my experience it opens the door to a slew of unnecessary complexities.  An industry has been unleashed to scare us all about the challenges of e-discovery.  It’s true that the amount of discoverable information has grown exponentially because of technologies like email, multimedia, and social media, but then again the tools for managing that information greatly offset those challenges.  Honestly, would you rather have to parse through 10,000 pages of paper or 1,000,000 pages of electronic information hooked into a search engine and available anytime via an internet connection?  I’d take the million.

The other issue I see is that e-discovery has been used as an excuse to tack on additional unreasonable complexities to the discovery process.  There is clearly an important place for forensics experts in the industry, but haven’t things gotten out of control?  Asking people to review the full forensically sound chain of custody and history of a document would be like dusting and fingerprinting every single piece of paper you find in a file cabinet.  And then producing that along with copies of the document to the other party.  It’s just not reasonable.  E-discovery has opened the flood gates for people, mostly our technology peers citing some pseudo-technical expertise, to scare people into investing far more resources than necessary to manage the discovery process.

I’m sure you can think of many more examples.  It took me a long time to distill the simplicity of this discovery process, but these days I’m resolute in breaking down the false complexities that this industry has created.  Granted, for many matters, executing on discovery will be expensive and time-consuming, mostly due to the volume of information that needs to be reviewed.  But the technical complexities really aren’t so bad.  It’s a process as old as our legal system and as simple as identifying, reviewing, and producing relevant information.  Sorry, I mean e-information.

See Rakesh’s take on Frank from ’08 as well.


The news filtering back to us from LegalTech (#ltny on Twitter) is “Cloud’s the word.”  As you’d imagine, for a company committed to bringing uncapped storage and processing to the legal industry, it’s music to our ears.  Only trouble is, we’re a little worried that there are a lot of folks out there still trying to nail down what exactly “Cloud” means.

We tend to agree with a lot of the NIST definition of “Cloud Computing”.  Let’s take a look at their 5 Essential Characteristics.

1. On-demand self-service.

A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

We take this one step further, handling the launching and monitoring of resources for you.  A large-scale action on your part results in new servers launching to take care of the heavy lifting with maximum efficiency.

2. Broad network access.

Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

You can access your data from anywhere an internet connection is available.  Be it via a traditional browser or by phone.

3. Resource pooling.

The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Your data is stored in some of the most reputable data centers in the world.

4. Rapid elasticity.

Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

With Nextpoint, you have uncapped access to storage and processing power.  Uploading large amounts of data will result in a large number of servers starting up to store your documents, index them for search, create preview images for the web and presentation, etc.

5. Measured Service.

Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.

Don’t pay for server capacity that you rarely, if ever, actually need.  Don’t pay for the software on the machines or their upkeep.  Let us take care of the technology so you don’t have to.  Let Lawyers Be Lawyers.


1. Security
I’m just going to start with security because it’s probably one of the most common “arguments” we hear against moving to the cloud.  The truth is when you move to the cloud, you will greatly enhance the security of your data immediately.  Is your organization undergoing regular voluntary audits for SAS 70 certification?

In an independent TechnoLawyer review of Nextpoint, Brett Burney said “… data is unquestionably more secure on the highly-encrypted, highly-secured server farms under Amazon’s watchful IT army than an old, out-dated server sitting in the broom closet of a law firm.”  I concur.  Even worse, are you letting people tote around sensitive client data on laptops and smart phones?

2. Disaster Recovery and Business Continuity
We often hear law firm technologists claiming that storage is inexpensive.  And it’s true that you can go to Best Buy and buy a terabyte hard drive for $99.  It’s also true that a USB hard drive with a terabyte of data on it is a disaster waiting to happen.  If you take your data seriously, you know you need to have it geographically redundant, not simply backed up.  You need it in a secure physical location at all times.  You need RAID configurations.  And the list goes on.

We’re all aware that disasters happen, but there is no reason to put your business or livelihood at risk.  Not when there are companies readily offering a service that will allow your business to be up and running from a hotel with a laptop.

3. Scalability
Internally at Nextpoint, I’m pretty certain people are tired of hearing me talk about scalability.  I’m fascinated by Moore’s law which “describes a long-term trend in the history of computing hardware, in which the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years.”  In other words, computers double their power every two years.  It’s incredible, and it also applies to storage.

Some people interpret this to mean that storage and computational power are cheap, so they buy hardware.  Unfortunately, what happens in practice is that we are creating exponentially more data every year to take advantage of that new power.  Remember when Hotmail shocked the world with 50MB of free storage?  I have 25GB of storage on my Gmail account.  If you buy X storage today, by the time you plug it in it’ll be too little, and you’ll have paid twice what the same capacity will cost before long.  Spend all you like and you will get no closer to keeping pace.  You are competing with Google, Amazon, and Microsoft.  These companies are building physical data centers with walls made of hardware.  You can’t beat them… join them.
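To put rough numbers on that arms race, here’s a small sketch.  All the figures are made-up assumptions for illustration: it simply supposes your data doubles every two years, roughly tracking Moore’s law, and shows how a one-time storage purchase covers an ever-shrinking share of what you hold.

```python
# Illustrative sketch: if data doubles every two years along with
# hardware capacity, a one-time storage purchase covers a shrinking
# share of your data.  The numbers are assumptions, not measurements.

purchased_tb = 10   # storage bought once, today
data_tb = 10        # data held today

for year in range(0, 11, 2):
    share = min(1.0, purchased_tb / data_tb)
    print(f"year {year:2d}: data {data_tb:5.0f} TB, purchase covers {share:.0%}")
    data_tb *= 2    # the doubling that makes buying ahead futile

# first line: year  0: data    10 TB, purchase covers 100%
# last line:  year 10: data   320 TB, purchase covers 3%
```

Five doublings later, the hardware you bought covers about three percent of your data, which is the whole argument for renting capacity that grows with you instead.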

4. Cost
Cloud computing is also sometimes referred to as utility computing.  The key things to take away from that are that you can use as much as you’d like (scalability) and that you only pay for what you use.  200 years ago a factory might set up shop next to a river as a means to generate power.  I’m not going to say that’s outdated or bad.  With power, I actually think it’s brilliant: it’s not likely that a company’s power needs will double every two years, so you likely won’t hit your limit.  But with your computational needs, it’s unquestionably a bad idea, as we explored in the scalability discussion.

So here is the deal with cloud solutions.  You pay for what you need and nothing more.  Regardless of how the cloud service charges (by user, by data, or both), you will pay for only what you need.  That means no large up-front expenditures.  And for you law firms, it probably means it’ll be easy for you to track your expenses directly to matters.  It certainly does if you use Nextpoint.  So not only is it less expensive, it’s easier to track and pass through to customers.

5. Focus
Put simply, you aren’t in the technology/software/IT business.  We always say LLBL.  Let Lawyers Be Lawyers.  We’re happy to report that we got out of the email business a long while ago and went to the cloud.  And you know, we got by okay before but Google is much better.


We’d like to thank Dave Schaaf for documenting his experience moving from last generation litigation technology to the cloud.  It’s a transition that we foresee everybody making soon enough, but he’s a trailblazer with a perspective that we value greatly. Thanks Dave!


I’ve been involved in the legal industry for 7 years now, having started out with a jury research and trial consulting firm, before joining Nextpoint. With my old company, I learned the Trial Director platform, becoming a certified trainer, and using it while manning the “hot seat” on major trials. It was a bit cumbersome to learn, but I got to the point where it was a comfortable, albeit frequently frustrating application. Having this esoteric expertise was my livelihood, so I was not exactly open to changing to a new, unfamiliar technology.

When I came to Nextpoint, the Trial Cloud was in its infancy, and during my first trial with it we only used the deposition designation capabilities, which were amazing. It allowed members of the trial team to create and revise deposition designations, which I could then access to generate a cut list for the depo video, as well as PDF designation reports that could be exchanged with the other side. It represented a sea change from the old-school methods of manually highlighting transcripts in various colors of highlighter, or typing each page/line reference into an Excel spreadsheet. I was pretty impressed with this new web-based platform, but I had no idea how robust a future it had.

Screenshot of Trial Cloud's presentation tool - Theater


The next trial I worked on was the first where we used the document capabilities of the Trial Cloud. I had some trepidation about relying on something other than my trusty local Trial Director database. I could understand using it for designations, because the traditional methods were cumbersome, but Trial Director works fine. It’s comfortable (at least to the few people that have mastered it), it ain’t broke, so why fix it? I begrudgingly eased my way into using the Trial Cloud to manage the trial exhibits, so that other members of the team could access them as they were admitted, and I soon saw the advantages of having all the data available to the entire team via the Internet.

My third trial using the Trial Cloud marked a significant advance. While the old guard such as Trial Director and Concordance were rolling out modest upgrades to their decade-old platforms, Nextpoint was radically improving the capabilities of theirs. And the best part was that I didn’t need to install anything, I simply logged in and it was there. We had over a million pages of documents we needed to potentially access in court – enough to bog down my local Trial Director database to the point that searching for a particular Bates number could take five minutes, which is an eternity in court, and unacceptable when 100 people in a courtroom are staring at you to make something appear on the big screen. Fortunately, we were armed with a T1 connection in court, so after some frustrating, tense moments dealing with Trial Director I ventured into using the Theater component of the Trial Cloud to display my evidence in court. I was very skeptical that it could replace Trial Director. It didn’t have nearly as many bells and whistles. It seemed so basic. But I found that the brilliance of Theater is that Nextpoint took the few tools that you use 95% of the time in Trial Director, incorporated them into Theater, and left out the superfluous stuff. The result is that you have the ability to create call-out zooms, highlight, underline and redact documents. You can save your treatments so that you can effortlessly recall them later. I have found that the simple functionality is much more user friendly than Trial Director, particularly when dealing with document treatments. In Trial Director, you must go through hoops to bounce between a document and its saved treatments, but with Theater, you simply type a single key to recall a treatment, and a single key to clear it. So much easier!

Another immensely useful aspect of the Trial Cloud is how it makes collaboration so much easier. On my most recent trial, paralegals and associates were able to pre-treat the documents they wanted to use for witnesses, save all the treatments, and I was able to access them in court. True division of labor, rather than having multiple people stay up late to create them and add them to a local database. And when it came time to print exhibit binders for the jury, we simply exported the saved treatments, and included them in the binders (with permission from the judge) to direct their attention to the portions of the documents that we covered in the trial.

I got to enjoy the capabilities of the Trial Cloud in courtrooms where we had Internet connections. You might not be so fortunate, but never fear, the Trial Cloud has wonderful exporting capabilities, so that you can prep your evidence in the Trial Cloud, then export it to a Trial Director database. But with more and more courtrooms going electronic, and allowing or even providing Internet connections, the Nextpoint Trial Cloud is the wave of the future. I think it’s human nature to fear change, to be averse to trying something new. But if you check out the Nextpoint Trial Cloud, I’m sure you will find as I did, that it is a major leap forward in evidence management and trial presentation technology. Give it a whirl!


Dave Schaaf is a litigation technology expert for Nextpoint out of Los Angeles.  He has extensive experience with evidence management, courtroom presentation, and trial demonstrative development.  He’s manned the “hotseat” on major trials for clients such as Commerce Bank and ExxonMobil.  If you’d like to get in touch, Dave can be reached at dschaaf at nextpoint dot com or simply by commenting on this post.


2009 was full of additions and improvements for the Nextpoint web applications.  And rest assured we have an even bigger year ahead of us in 2010.  We’d like to thank all of our customers and the community of people who follow us on Twitter, read this blog, and inspire us in our quest to build exceptional litigation technologies.  There’s a huge need out there and with the exponentially growing digital footprint that we all create, the challenges will continue to mount.

2009 Highlights

  • Discovery Cloud.  The world’s most efficient and powerful e-discovery platform lives in the cloud.  This release made processing, early case assessment, culling, redacting, deduping, Bates stamping, and production practical and accessible and took discovery technologies from the 1980s to the present overnight.
  • TechnoLawyer gave the Nextpoint Trial Cloud a TechnoScore of 4.8 out of 5.  It’s an unprecedented achievement in our industry.  The reviewer, litigation technology expert Brett Burney, asked the question “How can a ‘trial preparation’ tool look modern and stylish and be completely functional at the same time?”  We’re proud that we found the answer and grateful for the review.

There is so much to come in the year ahead, including a couple of huge near-term announcements which we’re bursting to share.  If there’s one goal that we have above all else, it’s to grow our community starting with this blog.  We want your ideas for building great litigation technology and we mean it when we say that we’ll do our best to get every idea integrated.  Thanks!


Why is it that in all the introductions to cloud computing we read, processing power goes all but unmentioned?

The cloud is processing power.  The cloud is having 100s or even 1,000s of servers doing work for you for pennies on the dollar.  The cloud is about having access to an infrastructure that IT departments drool over.  The cloud gives you the ability to turn on servers like turning on all the lights at Wrigley, but at the cost of just a few incandescent bulbs.

So much of the cloud is about power.  We shouldn’t miss that point.  It’s a big one.


Naturally, I’ve been reflecting about the applications I’ve worked on prior to coming to The Lab, and comparing those applications with what we’ve got here. One of the things that always comes to mind is how the definition of web app can vary so vastly from place to place. I can say I’ve worked on them for the last 10 years, but I think this is the first one that truly fits the bill.

There are many vendors out there touting their software as a web app, but usually this software comes with some of the baggage you’d expect out of an old-school client/server application. “One instance per client” apps, where client-by-client upgrades and maintenance scale so poorly that vendors and clients alike spend so much time and money coordinating that the benefits of “access from anywhere” quickly wither away.

True Web Apps should deliver not only on the promise of “access from anywhere”, but also on the promise of “update everywhere”.  From a consumer’s perspective, it’s wonderful to be spared the worry of coordinating upgrades; from an IT department’s perspective, it’s wonderful to be rid of the headache of maintaining the environment; and from the vendor’s perspective, it’s wonderful to have simple deployments of updates. It’s wonderful for all.

So, do you have a web app or do you have a Web App?


Everybody is concerned with limiting the amount of data that needs to be reviewed.  It’s the number one cost in e-discovery, after all. Clearly, the best way to conserve costs is to review less, and there is a technique that we software engineers use every day that can drastically reduce the number of documents in review: a concept from CS101 called normalization. Essentially, it’s a mechanism for ensuring the integrity and efficiency of a database, and it provides a framework for specifying the degree to which redundant data needs to be stored. Our goal is to remove redundant data while providing an elegant means of searching and reviewing hierarchical document sets in context.

So here is the typical example we see in e-discovery review.

Email 1 has two attachments, Attachment 1 and Attachment 2
Email 2 has one attachment, Attachment 1

There are much more complex examples of this as well, but this should illustrate the point just fine. Attachment 1 is a duplicate file, meaning it is literally the exact same file, with exactly the same metadata, and the same md5 hash. The only difference is that in one situation it’s attached to Email 1 and in another it’s attached to Email 2.

Most review applications will keep duplicate copies of the attachment and group them with the parent emails, essentially flattening the attachment with the email and creating multiple versions of the same pages.  If you choose (or are forced) to review Attachment 1 as two completely different documents in this manner, as most common review tools require, you are going to significantly increase the time to review, tag, and code that document. Not only that, but you are going to increase the possibility of a redaction not being applied to all versions of that document.
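The normalized alternative can be sketched in a few lines of code.  This is a toy illustration with hypothetical names, not Nextpoint’s actual data model: key each attachment by its md5 digest, store the file and its work product (tags, redactions) exactly once, and record each parent email as a link.

```python
import hashlib

class ReviewStore:
    """Toy normalized review store: one record per unique attachment."""

    def __init__(self):
        self.attachments = {}  # md5 digest -> {"filename", "tags", "redactions"}
        self.parents = {}      # md5 digest -> list of parent email ids

    def add_attachment(self, email_id, filename, data):
        digest = hashlib.md5(data).hexdigest()
        if digest not in self.attachments:
            # First sighting: store the file and its (empty) work product once.
            self.attachments[digest] = {"filename": filename,
                                        "tags": set(), "redactions": []}
            self.parents[digest] = []
        # A duplicate adds only a parent link -- never a second copy to review.
        self.parents[digest].append(email_id)
        return digest

store = ReviewStore()
d1 = store.add_attachment("Email 1", "Q1_financials.doc", b"same bytes")
store.add_attachment("Email 1", "Q1_goals.doc", b"other bytes")
d2 = store.add_attachment("Email 2", "Q1_financials.doc", b"same bytes")

assert d1 == d2                  # identical bytes -> identical md5 -> one record
store.attachments[d1]["tags"].add("hot")  # tag once, visible from every parent
print(len(store.attachments))    # 2 unique attachments across 3 parent links
```

Because a tag or redaction lives on the single record, it follows the attachment into every email that carried it; “reduping” on export is then just a walk of the parent links, emitting one flattened copy per parent.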

That said, it’s clearly important to look at emails and their attachments together to get the full context of the material. Using the above example, we’ll add titles to each of the docs.

Email 1 Q1 summary has two attachments, Attachment 1 Q1_financials.doc and Attachment 2 Q1_goals.doc
Email 2 Here are the falsified numbers you asked for Jim, Attachment 1 Q1_financials.doc

Even if Email 1 and Email 2 get assigned to different reviewers and they both have to review Attachment 1, the redactions, tagging, and coding information will be available to the other reviewer, making the process more efficient than working on that attachment from scratch. You’ve saved yourself a big chunk of time. And in an ideal situation, the same reviewer will already be familiar with that document and see the additional context of both emails during his review. As he’s reviewing Q1_financials.doc, he’ll know that it was sent out in a Q1 summary email and also in an email announcing that the numbers were falsified. So it’s not only more efficient, it actually sets a greater context for the document and provides additional valuable metadata.

Unfortunately, most review tools that we’ve seen don’t support this normalized relational database technique but our hope is that as more review tools extend to support native file formats this technique for organizing review databases will be available for all.  Our simple and powerful review platform maintains the relationships between all documents and allows for flattening or “reduping” on export or production.  It’s an approach that really provides the best of both worlds.


How do we design new features at Nextpoint?  We identify the essence of a feature and get it in the hands of users.  Typically it goes something like this:

1.  We brainstorm… use our imaginations and throw out a lot of “wouldn’t it be cool ifs” without any real consequence.

2.  Identify the major technical hurdles.  Inevitably there are a few.  And that’s a great place to start from an execution standpoint.  Build the proof of concept and make sure it’s doable.

3.  Strip it down.  We’ve already brainstormed all of the details and complexities.  So we just peel ’em away until we are left with the absolute minimal amount necessary to get the core benefit of the functionality complete.  And recognize that we might be leaving a ton of other potential benefits on the table.

4.  Release.  Get it out there and in use as soon as possible.

5.  Improve it based on real-world user feedback.  Oftentimes the ideas from the brainstorming phase come back… but surprisingly many of them never do.  And if they do, we already have an understanding of the concept and have discussed where it fits and the difficulties with implementation.

So basically, we branch out and explore the feature.  Let the scope fly wildly out of control, but only on whiteboards and in our imaginations.  And then strip it down to the essence, get it in use, and respond to real user feedback.


Let Me Out!

I recently made the switch from 30Boxes to Google Calendar. I was generally happy with 30Boxes but just had to give Google a try when it started grokking the Outlook invites I was receiving. The biggest question I had to answer before I made the leap was: “Just how big of a leap is this?”.

The fact of the matter is: Google Calendar was easy to join – and that’s great – but if it wasn’t easy to leave, I wouldn’t have given it a chance. I really like products that make canceling/leaving easy and I’m not the only one around here that feels that way.


If you want to switch your evidence system, it should be easy. It shouldn’t cost you some excessive amount of money and it shouldn’t cost you some excessive amount of time.

Likewise, if you want to “give a new product a chance” – you need to know that if you don’t like it, you can always dump everything back into the old system without wasting too much time and/or money – and certainly without losing any data. The last thing you need is one project stuck over in some tool that you didn’t like.

Of course, nobody wants you to leave their product and take your money elsewhere, but if they really believe in their product: They know you’ll be back.

