A Case for "Digital Rights Assistance"

(and Against "Digital Rights Management")

by Tom Lord, 19th September 2008

Executive Summary

The paper begins with a tediously detailed list of definitions, so that we can speak with a modest level of precision. This is followed by the arguments for "DRA" and against "DRM".

One can make sense of this paper even if one skips the details of the definitions and uses only these two summaries:

Summary Definition: "DRM"
A computing system that looks at coded licensing data in a media file and refuses to operate on that file in some cases, depending on the contents of that licensing data. That is: a computing system that restricts what data it will process in an attempt to help enforce restrictive licensing terms.
Summary Definition: "DRA"
A computing system that looks at coded licensing data in a media file and offers the user information about that data. Like DRM, the actions of a DRA system can be triggered by "what a user does". For example, a DRA system might offer the user licensing information for a font whenever the user attempts to print using the font. Unlike DRM, DRA leaves the user in control and does not second-guess the user's honesty or criminality: DRM would prevent printing in some cases; DRA will never prevent printing. DRA only advises the user; it does not arbitrarily refuse to process certain data.

After the more detailed definitions, this paper argues for "DRM: bad / DRA: good" by showing that:

  1. all systems containing DRM necessarily contain bugs
  2. these bugs matter to such a great extent that DRM ought to be illegal (DRM endangers the civil rights and even the lives of users)
  3. DRA is what is obtained by removing the bugs from DRM
  4. DRA can be effective "in the real world" for both users and licensors

Defining Terms and Introducing "Digital Rights Assistance"

Licensable Expressions

"Licensable Expressions" are expressive works which are subject to copyright, trademark, or patent law. These are commonly described as "Intellectual Property" but that term leads to certain confusion for these works do not constitute a singular type of property nor are they encompassed by the adjective "intellectual". We prefer the term "Licensable Expression" to focus on two objective characteristics that we care about: that the works are expressions rather than physical artifacts; and that the works are subject to licensing under copyright, trademark, and/or patent law.

Digital Rights Expression

A "Digital Rights Expression" is a legally significant, human readable expression asserting the permissions granted by and rights retained by the holder of a copyright, trademark, or patent. The term "Digital Rights Expression" was coined by the Creative Commons.

Digital Rights Coding

A "Digital Rights Coding" is a machine readable model (perhaps incomplete) of a Digital Rights Expression that accompanies the Licensable Expression to which the Rights described apply.

For example, a font file format might include a bit that, if 0, indicates the font is permitted to be used for printing but, if 1, indicates that the licensing of this copy of the font forbids printing. The bit "encodes" in machine-readable form a significant detail of the Digital Rights Expression that applies to the font. Thus, we say it is a "Digital Rights Coding".
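
As a purely hypothetical sketch (the header layout, flag name, and helper below are invented for illustration and do not describe any real font format), such a coding can be as small as one flag bit that a program reads back out of the file:

    # Hypothetical sketch only: an invented one-bit Digital Rights Coding.
    # Byte 0 of the made-up header is a flags byte; bit 0x01 means
    # "the licensing of this copy forbids printing".

    NO_PRINT_FLAG = 0x01  # invented flag, not part of any real font specification

    def printing_forbidden(font_bytes):
        """Return True if the coded licensing data marks printing as forbidden."""
        flags = font_bytes[0]              # first byte of the invented header
        return bool(flags & NO_PRINT_FLAG)

    # A copy of a font whose coding says "printing forbidden":
    restricted_font = bytes([NO_PRINT_FLAG]) + b"...glyph data..."
    print(printing_forbidden(restricted_font))   # True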

Digital Rights Management

A "Digital Rights Management" system is an arrangement of hardware and/or software which attempts to enforce restrictions described by a Digital Rights Coding.

An example to illustrate a "restriction" is a bit (the "coding") within a font file that, if set to 1, indicates that the user may view the font on-screen but not print using the font.

By "enforce" we mean that the computing system will decline to perform certain computations or actions solely because of the coded restrictions.

For example, a program might be perfectly capable of printing using a particular font yet refuse to do so solely because the bit restricting printing is set to 1.
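
Continuing the hypothetical sketch above (printing_forbidden is the invented helper from the previous sketch, and send_to_printer stands in for whatever printing machinery the program already has; neither is a real API), DRM-style enforcement looks roughly like this:

    class PrintingForbiddenError(Exception):
        """Raised when enforcement declines to print because of the coding."""

    def drm_print(document, font_bytes):
        # The program is technically able to print; it declines solely
        # because of the coded restriction.
        if printing_forbidden(font_bytes):
            raise PrintingForbiddenError(
                "this copy of the font is coded as not licensed for printing")
        send_to_printer(document, font_bytes)   # assumed helper, not shown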

Well Defined Interaction

A "Well Defined Interaction" is a machine-recognizable class of user interaction events which may occur during the execution of a program and which the program is able to respond to by producing output available to users.

An example of a Well Defined Interaction is the class of events "the user clicks on the button labeled FOO":

The program can (obviously) recognize when such an event takes place -- so this is a machine-recognizable class of events.

The program can (obviously) produce output (such as popping up a dialog box) which is reasonably expected to be available to the user.

Those conditions satisfied, "clicks button FOO" counts as a "Well Defined Interaction".
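
A rough sketch of those two conditions in Python (the tiny event/handler machinery below is invented for illustration, not taken from any particular GUI toolkit):

    handlers = {}   # event name -> callback

    def on(event_name):
        """Register a callback for a machine-recognizable class of events."""
        def register(callback):
            handlers[event_name] = callback
            return callback
        return register

    @on("click:FOO")
    def foo_clicked(event):
        # Output reasonably expected to be available to the user
        # (a console message stands in for a dialog box).
        print("You clicked the button labeled FOO.")

    def dispatch(event_name, event=None):
        if event_name in handlers:       # the event class is recognizable...
            handlers[event_name](event)  # ...and the program can respond with output

    dispatch("click:FOO")                # -> "You clicked the button labeled FOO."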

Informative Digital Rights Coding

An "Informative Digital Rights Coding" is a Digital Rights Coding with an additional property:

Like a Digital Rights Coding, an IDRC contains a machine readable model of some aspects of a Digital Rights Expression. However, an IDRC is explicitly non-normative, as explained further below.

Additionally, an IDRC specifies a set of Well Defined Interactions and, for each such interaction, a "response" suitable for formatting as output for a user.

An IDRC is thus a kind of simple program consisting of "if then" statements. A typical IDRC program might say:

IF the user pushes the "print" button, THEN output a reminder that this copy isn't licensed for printing
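
One way to picture an IDRC is as a small, non-normative table shipped alongside the content, mapping Well Defined Interactions to informative responses. The interaction names and message text below are invented for this sketch:

    idrc_rules = {
        # Well Defined Interaction -> informative response offered to the user
        "click:print": "Reminder: this copy of the font is not licensed for printing.",
        "click:embed": "Reminder: redistributing this font requires a separate license.",
    }

    def idrc_response(interaction):
        """Return the informative message for an interaction, if any.

        Note what is absent: nothing here can forbid the interaction itself;
        the coding only supplies text that a program MAY show to the user."""
        return idrc_rules.get(interaction)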

Note a key difference between a Digital Rights Coding and an Informative Digital Rights Coding: DRC is normative whereas IDRC is simply informative.

For example, the printing-restriction bit in a font file is normative in the sense that a program operating as specified MUST not print if the bit is set.

In contrast, the IDRC is informative in the sense that an IDRC rule ("if the print button is pushed, offer message X") specifies a program that MAY inform the user of data that is perhaps relevant to the user. The IDRC rule does not force the user to see the informative message. The IDRC rule does not force the program to refuse to print.

Digital Rights Assistance

The centerpiece of this essay is "Digital Rights Assistance":

A "Digital Rights Assistance" system is an arrangement of hardware and/or software which is able to interpret IDRC programs according to the preferences of the user.

Note that DRA is different from DRM. A DRM system would refuse to print the document. A DRA system may still print the document; it simply also produces output that can notify a user of a possible license violation.

Note that an important part of the definition is the phrase "according to the preferences of the user". A program is a DRA program even if it permits the user to "turn off" notifications generated by interpreting IDRC data.
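
A minimal sketch of such an interpreter, reusing the invented idrc_response table and send_to_printer helper from the earlier sketches (notify_user is likewise an assumed UI helper, standing in for a dialog box or status line):

    def dra_print(document, font_bytes, notifications_enabled=True):
        # Consult the IDRC only to inform, and only if the user wants that.
        if notifications_enabled:
            message = idrc_response("click:print")
            if message:
                notify_user(message)              # assumed UI helper
        # Unlike DRM, the requested action proceeds regardless of the coding.
        send_to_printer(document, font_bytes)     # assumed helper, as above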

The Case for Digital Rights Assistance

DRM Has a Bug

In a common sense way, "DRM has a bug". We mean that all forms of DRM, actual or future, have this bug -- it's built into the very concept of DRM:

The rights of a user to use a given licensed work are determined by a combination of the user's circumstance, the work's digital rights expression, the relevant law, previous jurisprudence regarding the concerns of the license, and the future jurisprudence that pertains to the user's actions.

An example of "future jurisprudence" is what happens if the user takes some action and winds up as a defendant in court. In that case, the legality of the user's actions is determined, ultimately, by the choices of jurists, who are expected to consider the totality of the laws, the precedents, and the particulars of the circumstance.

There is, therefore, no fixed set of rules which accurately determines, in advance, whether a user's proposed action upon some data is legal or not. This is essentially a simple consequence of the definition of "legal".

To be concrete: if a bit in the font file "says" that a user may not print using that font, does that mean the user may not print using that font as a matter of law? No, it does not. Printing may be perfectly legal, regardless of the "control bit".

Now, suppose it were the case that DRM does not have a bug. For example, suppose it is not a bug when a program refuses to print because of the control bit.

We'll see swiftly that this leads to a contradiction (and, indeed, that we can almost always demonstrate that a bug exists):

If DRM has no bug then it is possible to implement DRM in a way that always permits a legal use of the licensed data. If a DRM system prevents a legal use, in common sense usage of the terms, it has a bug.

Some would try to establish the perfection of DRM in a simple-minded way by writing a license or contract that says "licensed for whatever uses and only those uses the software permits" -- i.e., so that the DRM system is alleged to be bug-free "by definition". Such a license or contract would say, in effect: if the control bit is set to 1, then the user lacks the right to print.

And yet, a court is not obliged to agree nor are all jurisdictions obliged to agree. The "control bit" might be a good hint about the user's rights -- but by definition it is not definitive.

Yet if bug-free DRM exists, then it has the form of a fixed set of rules that can compute the legality of any future user action with the data in question. And that's a contradiction. The legality is not computable for it depends on the jurisdiction and on jurists.

We know, by the definition of "legality", that no such fixed set of rules can possibly exist. Therefore, bug-free DRM does not and can not exist.

Any implementation of DRM that claims to be "bug free" is a lie.

DRM's Bugs Matter

DRM, by definition, blocks users from certain actions that are otherwise technologically "natural". For example, the system might refuse to print using a given font even though all of the data is present that's needed to print. A system might refuse to play a video or audio file for similar reasons.

Because of the DRM bug, it is assured that, realistically, some users will be denied a legitimate use and will suffer damage as a result:

If no fixed rules can accurately decide what to prevent, yet fixed rules prevent some actions -- it is always easy to envision realistic scenarios in which the rules block a user action that is at least the user's civil right and quite possibly a matter of life and death.

Continuing our font example, one can imagine a cluster of first responders, after an earthquake, desperately needing to print out a certain inventory list for a nearby warehouse -- only to be thwarted by a well-meaning but legally inaccurate DRM system. Lives may hang in the balance. Printing the inventory using the font may be perfectly legal, causing damage to nobody. And yet a DRM system would needlessly refuse to do so if the relevant control bit happens to be set, or is defined misleadingly.

Conversely, the absence of DRM introduces no such dire bugs. A user can not violate his own civil rights by using data in an unauthorized way. A user can not suffer death because he was able to access licensed information in a time of critical need.

No responsible engineer ought to implement DRM. It ought, in fact, to be against the law to do so because of the life-critical risks it imposes upon users, not to mention the civil-rights threats it imposes upon users.

Repairing DRM Yields DRA

DRM's bug follows from two aspects of DRM:

  1. DRM attempts to define rules for legally significant events but by definition no such rules exist. For example, DRM says "when printing" but what it really means is "when printing unlawfully". No program can define "unlawful printing" so DRM makes the mistake of talking about "printing" instead of "unlawful printing".
  2. DRM involves forbidding technologically reasonable uses of data, thus inevitably causing harm to users. The firemen who can't print the warehouse inventory, described above, are a fine example.

What happens if we start with the idea of DRM, but remove those problems:

  1. Don't make rules that (try and fail to) identify legally significant user actions. Rather, limit rules to "Well Defined Interactions" -- points in user interaction that are trivially well defined and computable (such as "the user pushes the print button") and upon which a program has the opportunity to provide output to the user.
  2. Don't apply rules to "switch off" otherwise straightforward functionality. Rather, just make a reasonable effort to inform the user of the possible significance of what he or she is doing.

With those changes, what is left?

The resulting content protection system is one which accepts as input (e.g., within a media file) a description of Well Defined Interactions (e.g., "user issues a print command") and, for each such interaction, specifies output which a user may wish to receive (e.g., a warning that a font file does not appear to be licensed for printing).

Such a system is exactly what we have defined as "digital rights assistance": a way to annotate content with rules like "if the user tries to print, offer the user this warning...".

In contrast to DRM, DRA fails safely by not blocking the user from any action that might be perfectly legal (and important, perhaps even life-critical). E.g., the "first responders after an earthquake" can print the needed warehouse inventory, even though the software warns of a possible copyright violation.

In contrast to DRM, DRA is well defined. Programmers can get DRA right, whereas (because "DRM has a bug") programmers can not get DRM right no matter how hard they try. This is because DRA refers to Well Defined Interactions rather than to legally significant interactions.

Can DRA "Work" in the Real World?

Many holders of legal rights to Licensable Expressions believe strongly that DRM is both reasonable and inevitable. They "count on" (e.g., in business plans) systems of DRM being permanently established.

Here, however, we have pointed out simple and non-controversial reasons for rejecting DRM -- reasons why no systems builder ought to implement DRM. We've pointed out that DRM will inevitably harm the civil rights of some users, and perhaps even deprive some users of life.

What then, of Licensable Expressions whose authors are counting on DRM? Ought those works be withdrawn and generally unavailable in digital form? Or might DRA, Digital Rights Assistance, be protection enough to justify publication of those works in digital form?

A cynic might argue (probably correctly) that if users are not technologically prevented from violating licensing terms, then some users certainly will violate them. The cynic might additionally argue that, although some users will find ways to cheat even under DRM, far fewer will cheat under DRM than under DRA.

There are reasons to doubt such cynical claims but let's stipulate that they are true.

Even so, nothing in those claims justifies DRM. No systems engineer should create human risk of the sort that is the DRM bug. DRM is not an ethical option.

And nothing justifies withholding content: digitization and cheating will occur in every scenario, so the task for licensors of content is not to find a system of publication that prevents cheating, but to choose one that tends to maximize the amount of "not cheating".

DRA, digital rights assistance, is a close cousin of DRM but without the bug. It is difficult, at best, to envision a system that would do a better job of maximizing license compliance without being DRM.

In other words, DRA is as good as it gets, and "as good as it gets" is the legitimate, economically rational goal for restricted-content licensors.