Texas CoA Addresses Electronic Community Property and Invasion of Privacy

August 5, 2016

Reference:

Miller v. Talley Dunn Gallery LLC, 2016 Tex. App. LEXIS 2280

(Tex. App. – Dallas March 3, 2016) (mem. opinion)

(Cause No. 05-15-00444-CV)

Relevant Documents:

Memorandum Opinion:  March 3, 2016, Cause No. 05-15-00444-CV

Texas Penal Code Chapter 33

In this case, the trial court determined, in part, that Talley Dunn and the Talley Dunn Gallery LLC had “established a probable right to recover on their claims under the HACA” [Harmful Access by Computer Act].  [March 3, 2016, Cause No. 05-15-00444-CV, pg. 19]

In his appeal, Bradley B. Miller argues that, while he admits he took screenshots of information contained on the phone, the screenshots do not qualify as “access,” and that he had effective consent in any event because the cell phone was community property.  [March 3, 2016, Cause No. 05-15-00444-CV, pg. 21-22]

Texas Penal Code § 33.01(1) defines access as:

“to approach, instruct, communicate with, store data in, retrieve or intercept data from, alter data or computer software in, or otherwise make use of any resource of a computer, computer network, computer program, or computer system.”

Neither party disputes that a cell phone is a computer, and the appellate court found that, in order to take the screenshots, Miller necessarily HAD to access the computing device within the definition of the penal code.  [March 3, 2016, Cause No. 05-15-00444-CV, pg. 22]

Regarding his argument that he had effective consent to access the cell phone because it was community property, the CoA relied upon the penal code definition of ‘owner’ as:

“a person who:

(A) has title to the property, possession of the property, whether lawful or not, or a greater right to possession of the property than the actor;

(B) has the right to restrict access to the property; or

(C) is the licensee of data or computer software.”

Dunn used the cell phone on a daily basis, had the right to place a password on it (and had done so), and the court determined Dunn had a ‘greater right to possession of the cell phone’.  [March 3, 2016, Cause No. 05-15-00444-CV, pg. 23]  Further, the CoA notes earlier in the opinion that “[N]othing in the Texas Constitution or our common law suggests that the right of privacy is limited to unmarried individuals.”  [March 3, 2016, Cause No. 05-15-00444-CV, pg. 20]

Interestingly, the court does not address the multiple software and operating-system license agreements that users must acknowledge and accept to use a modern cell phone.  I would expect that to start coming up as another layer to the definition of ‘owner’, though.

Accordingly, the CoA concludes that “the trial court did not abuse its discretion by determining appellees established a probable right to recover on their claims under the HACA.”  [March 3, 2016, Cause No. 05-15-00444-CV, pg. 23]


Weekly Highlights: September 17, 2012

September 17, 2012

Things You Might Have Missed Last Week

(Highlights in legal, forensics, and electronic discovery news for the past week)

Interesting Electronic Evidence Cases

Inhalation Plastics, Inc. v. Medex Cardio-Pulmonary, Inc., No. 2:07-CV-116, 2012 WL 3731483 (S.D. Ohio Aug. 28, 2012)

The defendant inadvertently produced almost 350 pages of email.  After in camera review, the court found that many of the produced materials were “within the ambit of attorney-client privilege,” but nonetheless held that the privilege had been waived.

Weekly Highlighted Article

From E-Discovery Beat:

Experts Consider E-Discovery Implications of New ABA Ethics Rules Amendments

From BowTieLaw.com:

Forensically Examining a Lawyer’s Computer

Electronic Evidence News

Twitter Gives Occupy Protester’s Tweets to U.S. Judge

Court Issues 20-Year Product Injunction in Trade Secret Theft/eDiscovery Sanctions Case

Samsung Flexes Litigation Muscles at Apple Ahead of iPhone 5 Launch-Again


Eight Strategies To Control Information Forensic Costs

April 12, 2011

I’m often told that the biggest barrier to introducing information forensics to a potential case is the cost of doing so, and I believe it.  It is hard to explain to a client that they may expend resources with no return on the expenditure, and yet effective use of information forensics can be a valuable part of case strategy.  Here are eight strategies to effectively control information forensic cost:

  1. Prioritize Systems. In cases where there are multiple computer systems, hard drives or electronic devices involved, try to identify which ones are more likely to contain key evidence or facts in the case.  Your expert should be willing and able to help you do this, based on the facts of the case and the role of the devices involved.
  2. Image and Hold. Perform forensic imaging of the systems and devices involved to preserve them, but unless there are other factors involved, you may not need to analyze ALL the systems at once.  Start with the high-priority systems, and then see whether there is likely to be value on the other systems or devices.  “Image and Hold” can also be an effective early strategy for a single computing device.
  3. Be Selective. We are often approached with multiple cell phones and hard drives.  One of the first questions I ask is whether the cell phones were backed up on one of the computer systems.  If so, we can often process the backup (or “sync”) of a cell phone just as though we had the phone itself.  This helps avoid duplicated cost.
  4. Evaluate Before You Analyze. Full disclosure: this is a self-serving statement, in that Vidoc Razor runs a flat-rate evaluation service, but that doesn’t make it any less true.  Your expert must be able to provide an evaluation of the computer systems involved to identify which devices are useful to the case, versus ones that are redundant or don’t contain useful information.  Make sure that the evaluation is done in the context of the case, and not a simple cookie-cutter print-out of log files.
  5. Look for Flat-Rate Services. I have heard many complaints of forensic costs that run wild because of hourly rates.  It isn’t hard for a forensic service to provide cost-effective flat rates that still deliver high-quality results.  Your expert should be interested in a long-range relationship as part of your legal arsenal, rather than getting rich off of a single big case.
  6. Understand the Differences Between Data, Information, and Intelligence. This seems like semantics, but it really isn’t.  Data is a stream of un-evaluated, un-interpreted symbols.  Information is what data becomes once it is useful (in context).  Intelligence is what information becomes once it becomes fact.  Once you stop thinking about “data forensics” and start utilizing “information forensics” you can find all three in a variety of places beyond the hard drive, or as a supplement to the evaluation or analysis performed on a hard drive or cell phone.
  7. Know Your End-Game. It is easy to get caught in the flood of information that can open up in the effective use of information forensics.  It is equally easy to chase down information that doesn’t necessarily support your overall case strategy.  For each new tributary that opens up to you, ask yourself whether it actually supports your end-strategy, or potentially alters it.  If not, then why spend resources to chase it?
  8. Take a Deep Breath. If I had a nickel for every time I have heard the phrase “I am completely computer illiterate”, I would be living on easy street.  In a Yogi Berra-esque way: “This ain’t rocket surgery.”  For some reason the mere exposure to electronic investigation causes people to shut down.  While information forensics can be very technical, I promise you that the average attorney has dealt with much more complicated issues.  Take a deep breath and enjoy the new strategies and brand new streams of information that open up to you and your client and augment your ability to argue your cases.



Stripping Anonymity From the Internet

January 13, 2011
Stripping anonymity is like peeling an informational onion. It is about tying together otherwise benign pieces of information that, in the aggregate, allow you to identify, uncover, and infer the existence of other pieces of information. 

Pieces of information across the internet can be pulled in from so-called “Dark web” sources (sounds sexy, right? It actually just refers to information contained in databases that are not indexed by search engines), public records, search-engine-indexed information, metadata embedded in posted documents (photos, PDF docs, various graphics formats, etc.), online newsgroups, and social media sites, to name a few.
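
Even a single posted photograph can carry the camera model, timestamps, editing software, and sometimes GPS coordinates. As a minimal sketch (assuming Python with the Pillow library and a hypothetical downloaded image), pulling that layer out takes only a few lines:

```python
# A minimal sketch, assuming Python with the Pillow library; the file name is
# hypothetical. Shows how much identifying metadata a single posted photo can carry.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path):
    """Print whatever EXIF tags the image carries: camera model, timestamps,
    editing software, sometimes GPS coordinates -- each one a small clue."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # translate the numeric tag to a readable name
        print(f"{name}: {value}")

dump_exif("posted_photo.jpg")  # hypothetical image pulled from a public post
```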

Using these pieces of information to uncover locations, associations, activities, behaviors and motives is entirely possible (and, in fact, is done every day in active investigative work), but not in every case. As you may imagine, it is easy for the thread to get broken and for a logical disconnect to occur. The trick is to combine inductive and deductive reasoning with the real information you find, and then to develop theories about other possibly available pieces of information and test those theories.

At a certain point any investigation, electronic or otherwise, will likely require “boots on the ground” to verify assumptions.

For your reading pleasure, I’ve provided a link to a popular story from 2006 about the accidental release of “anonymous” search data by AOL and the subsequent work done by a NY Times reporter, who used the aggregated search queries to strip a user’s anonymity.

http://select.nytimes.com/gst/abstract.html?res=F10612FC345B0C7A8CDDA10894DE404482

Wikipedia entry on the same incident:

http://en.wikipedia.org/wiki/AOL_search_data_scandal

Mac != Automatically Safe (Take It From a Mac Fan!)

April 1, 2009

I love my MacBook Pro – ask anyone who knows me.

Before you Windows users leave thinking that this is YAFBR (Yet Another Fanboy Rant), you should all know that I believe strongly in using the right tool for the job – which does not always mean the trusty MacBook, and sometimes means using MS Windows instead (lord help me, but I said it and there is no taking it back).  Sometimes it involves Linux or a BSD variant.  I love them all for different reasons.

I am concerned by the number of Tweets I saw related to Conficker this morning stating (not an exact quote) “Thank goodness I have a Mac – it is safer than a PC…Macs never get viruses” and other sentiments that denigrated MS Windows to a greater or lesser degree.

Before you get too happy, consider the information discussed at CanSecWest last week and published by Milw0rm prior to vendor notification.  Check it out here (but come back!)

It is important to note that you are only as safe as your habits and software, Apple system or not.

I have worked a number of Apple forensics cases involving intrusion and interception of electronic communication.  In each case the firewall was turned off (the user wasn’t aware of how to control the firewall) and there was an astounding lack of logging (the user didn’t know how to control or review logs on the Apple system).

I can also tell you that the number of these types of cases is definitely on the rise.

Here is a quick test: if your Apple system (or Linux, *BSD, or Windows system, for that matter) was potentially compromised, how would you know?  Can you pull up, right now, failed connection attempts, firewall logs, or running-process logs?

If not, then take that as your sign and make sure to get yourself battle-ready.
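
If the honest answer to that quick test is “no,” even a few lines of script are a start. Here is a rough sketch, assuming Python and the application firewall log location used by older macOS releases (an assumption that varies by OS version), that surfaces recent blocked connections:

```python
# Rough self-check sketch, not a forensic tool. The log path below is an
# assumption: it matches the macOS application firewall on 10.5/10.6-era
# systems and will differ on other versions and platforms.
import os

APPFIREWALL_LOG = "/var/log/appfirewall.log"  # assumed location; adjust per OS

def recent_denials(path=APPFIREWALL_LOG, max_lines=20):
    """Print the most recent firewall 'Deny' entries, if any are being logged."""
    if not os.path.exists(path):
        print("No application firewall log found -- is logging even enabled?")
        return
    with open(path, "r", errors="replace") as log:
        denials = [line.rstrip() for line in log if "Deny" in line]
    for line in denials[-max_lines:]:
        print(line)

recent_denials()
```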

As botnets become more and more prevalent, if you are not part of the solution, you are truly a large part of the problem.


YouTube Struggles With a Wretched Hive of Scum and Villainy

March 5, 2009

InformationWeek Article: “YouTube Wrestles With Scammer-Generated Content”

InformationWeek reports that YouTube is “struggling” with posted videos showing such things as stolen credit cards, PINs, etc.  They go on to talk about how difficult it is to screen video content.

A single line mentions that meta-content can be used for screening (searching for keywords that can identify the content), but a YouTube spokesman goes on to say that they rely “on our community to know our community guidelines and flag content that violates the guidelines.”

First of all, the type of community that will be looking for that niche content isn’t going to be all that quick to flag it.

Secondly, how hard would it be to build a signature base of meta-word and behavioral screening to remove the largest portion of objectionable (illegal) content?  Here are a few ideas to think about as you read the article (a rough sketch of the scoring idea follows the list) – feel free to post your own:

  • SpamAssassin for content, anyone?  Use the metadata to help weight the red flag.
  • Watch topics that users post to/visit and use this to weight a flag.  For instance, a little old lady who is concerned about “poodles” and “identity theft” will not affect the weight as much as someone looking for “Free credit card numbers” and “MS Windows licenses”.
  • Use Natural Language Processing techniques to identify and weight actual posts (remember the “StupidFilter”?).
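
To make the first two bullets concrete, here is a toy sketch (in Python) of weighting metadata keywords and uploader behavior into a single review score. Every term, weight, and threshold below is invented purely for illustration, not drawn from any real screening system.

```python
# Toy sketch of a weighted "red flag" score built from video metadata plus a
# behavioral component. All terms, weights, and thresholds are made up.
SUSPICIOUS_TERMS = {
    "credit card numbers": 5.0,
    "cvv": 4.0,
    "fullz": 5.0,
    "license keys": 3.0,
}

def flag_score(title, description, tags, uploader_history_score=0.0):
    """Sum keyword weights found in the metadata, then add the uploader's
    prior-behavior score (the second bullet above)."""
    text = " ".join([title, description] + list(tags)).lower()
    keyword_score = sum(w for term, w in SUSPICIOUS_TERMS.items() if term in text)
    return keyword_score + uploader_history_score

# Anything over an arbitrary threshold goes to a human reviewer, not auto-removal.
video = {"title": "FREE credit card numbers + CVV",
         "description": "fresh fullz, no survey",
         "tags": ["carding"]}
if flag_score(video["title"], video["description"], video["tags"]) >= 5.0:
    print("queue for human review")
```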

I realize full well that these techniques can be gamed just like anything else, but it seems to me that they are viable, not so hard to implement (I use components of them in my work – although the scale is different!), and a darn sight better than relying on the crooks to report themselves!


The Top 5 Biggest Infosec Lies

March 2, 2009

I have compiled a list of what I believe are the biggest lies told by and about infosec.  Let me know if you have an addition to the list!

5. There is no evidence that the data has been misused….

This lie is typically told by a company that has just had their digital posteriors handed to them.  The first question that I want to ask upon hearing this one is:

“So… wait… you were completely unable to detect the intruders that were playing around in your own systems for 3 or 4 months, but now all of a sudden you can tell across the entire globe if the information is being misused?”

4. It was a sophisticated attack….

The biggest problem is deciding whether this lie is being told by the party that was breached or by the media.  For some reason the media classifies everything as “hacked”, even when it isn’t.  Add to this that the party that has been breached has two things working against it:

1.  Who wants to admit they were breached by something stupid?  If you are going to be breached, you want it to be the most sophisticated, complex attack known to man.

2. The “mouthpiece” for the organization that was breached likely doesn’t understand the technical issues.

3. Of course it is secure – the (Military/Law Enforcement/Government) uses this, so it has to be….

I was asked by a client to sit in on a product demonstration not too long ago, and the vendor’s mouthpiece kept harping on the fact that “This is so secure, NASA uses it!”  They were more than a little crestfallen when I demonstrated for them that they were sending their username/password in Base64-encoded form for the entire world to see – and only then did the page move to SSL encryption (on an expired certificate).
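
For anyone who has not run into this before: Base64 is a reversible encoding, not encryption, and recovering the credential takes one call. (The captured value below is fabricated for illustration.)

```python
# Base64 is encoding, not encryption: anyone who captures the string can
# reverse it instantly. The credential below is made up for the example.
import base64

captured = "YWRtaW46aHVudGVyMg=="           # what went over the wire
print(base64.b64decode(captured).decode())   # -> admin:hunter2
```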

The lesson here?  Just because no one has questioned it, doesn’t make it secure.

2. We have “Insert favorite technology here” so we know we are all set….

My first response to this usually is: “Tell me/show me the specific policy/procedure that your favorite technology is in place to support.  What about the policy and procedure that governs support of the technology?”  Most of the time, the organization is completely unable to complete this simple exercise.

Infosec technology that does not support policy and procedure is pretty much meaningless – at best you have wasted money, at worst you have created yet another attack vector through a mismanaged, poorly understood device.

1.  We are compliant with (HIPAA, GLB, Sarbanes-Oxley, PCI, etc.) so we know we are secure….

Ummm… so was Heartland….  Do we really need to go down this road?