In the last post, we finished gathering all of the metadata ingredients required for our recipe, and we are now ready to plan our cake’s final layer: the permissions algorithm. Of course, we could simply code the algorithm from the flowchart presented in Part 1, using the metadata in a few modules to enforce our security paradigm, and then call it a day. Sure, we could finish there…but that would be an antithetical conclusion to all of the work done up to this point. After all, a major argument for MDD is to reduce the need for recompilation and/or redeployment of software; we’ve already leveraged that argument to create a unique ORM framework. So, instead of cementing the rules of our permissions algorithm into our code base, could we make them dynamic as well?
As it turns out, yes, we can. In a previous InfoQ article, I described how one could create a business rules engine driven by a simple pidgin: a black box with a lexicon whose terms are determined by the very same metadata that drives the main infrastructure of the architecture. We can take the same approach when implementing the permissions algorithm, and for the sake of clarity, we will call this black box a permissions engine. We can then compose an enterprise DSL that incorporates both the Attributes and the contextual data describing our records. Since it should be comprehensible to a literate business user, we should design it to be easily reviewed and maintained by various stakeholders. As an example, let’s handle the scenario depicted in the flowchart from Part 1 with a snippet written in such a DSL:
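The exact keywords will depend on the pidgin you define; the sketch below is an illustrative assumption (the record names `Current`, `Incoming`, and `IncomingUser`, along with the lock semantics, are hypothetical), but it conveys the flavor of a DSL a business user could review:

```
ON SUBMIT OF Incoming BY IncomingUser:
    IF Current.Price IS LOCKED AND IncomingUser IS NOT LockOwner
        THEN REJECT Price WITH REASON "Price is locked by another user"
    ELSE IF IncomingUser HAS ROLE "Editor"
        THEN ACCEPT Price
    ELSE
        REJECT Price WITH REASON "Insufficient role"
```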
In this sample (and much like the scenario described in the InfoQ article), our engine expects two records to be supplied in order to execute the permissions algorithm: the Current instance of a record and its Attributes on the system, and an Incoming instance of the record with new data (submitted by IncomingUser). While the overall system has the responsibility of loading and passing along these records, it is the duty of the permissions engine to understand and then enforce the rules of the sample. In this case, our permissions engine will use both records and any available contextual data to determine whether or not to save the incoming “Price” Attribute.
Even though it is possible that the “Price” Attribute deserves special consideration, it’s more than likely that we will want to treat most or all of these Attributes in the same manner when processing a single record. In that case, we could treat this sample as our default algorithm and associate it with many (if not all) of our Attributes. By substituting “Price” with a placeholder that is eventually replaced by the target Attribute’s name, we can reuse the same DSL snippet and eliminate the pointless redundancy of multiple copies. If there are indeed special cases, however, we still have the option to create separate algorithms for them. All of these algorithms can then be stored as documents in a database optimized for document retrieval (such as MongoDB) and pulled by the permissions engine during its initialization. With just a bit of organization, we are now able to create a truly customized permissions policy for each data point that is submitted to the system.
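To make the placeholder idea concrete, here is a minimal sketch of how the engine could materialize per-Attribute rules from a shared template at initialization. The placeholder token `{ATTRIBUTE}`, the rule text, and the dict standing in for a MongoDB lookup are all illustrative assumptions, not the article’s actual grammar:

```python
# A shared default rule document; "{ATTRIBUTE}" is the placeholder the
# engine swaps out for each concrete Attribute name.
DEFAULT_RULE_TEMPLATE = """\
IF Current.{ATTRIBUTE} IS LOCKED AND IncomingUser IS NOT LockOwner
    THEN REJECT {ATTRIBUTE}
ELSE
    ACCEPT {ATTRIBUTE}
"""

# Attribute -> custom rule document; None means "fall back to the default".
# In production, these documents would be fetched from the document store
# (e.g., MongoDB) during the engine's initialization.
RULES_BY_ATTRIBUTE = {"Price": None}

def materialize_rule(template: str, attribute_name: str) -> str:
    """Replace every placeholder with the concrete Attribute name."""
    return template.replace("{ATTRIBUTE}", attribute_name)

def load_rules(attribute_names):
    """Build the engine's rule table: one DSL snippet per Attribute."""
    return {
        attr: materialize_rule(
            RULES_BY_ATTRIBUTE.get(attr) or DEFAULT_RULE_TEMPLATE, attr
        )
        for attr in attribute_names
    }

rules = load_rules(["Price", "Quantity"])
```

One default template thus serves every ordinary Attribute, while a special case only requires adding an entry to the rule store, never a redeployment.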
Our permissions engine will thus determine the applicability of each data point and whether it has the clearance to be persisted into our database. On top of this important functionality, we can reuse this subsystem in a number of ways. It’s not outside the realm of possibility that certain qualified stakeholders might want a preliminary report on the levels of clearance for their data before submission, so that they can get a preview of its fate. (For those who are fans of the movie “Minority Report”, the term precog certainly comes to mind.) In some cases, such intel could prevent wasted attempts at saving certain payloads (which can be a time-consuming process of multiple validation and correction phases) and could give those stakeholders the chance to address whatever potential obstacles might block their way. For example, Bob Smith might have the price locked on a few records, and Sue Doe might need Bob to lift the lock temporarily in order to submit a few emergency updates. In that case, we could create a user-facing service that simply invokes this subsystem in order to generate and return such a report to the calling stakeholder. That way, Sue does not learn about this discrepancy minutes later, when she receives the final status of her submitted record batch; she can get this information beforehand, saving valuable time.
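A sketch of that “precog” preview service follows. The record shape, the lock model, and the per-Attribute decision logic are hypothetical stand-ins for the real engine; the point is that the same evaluation runs over the incoming payload without persisting anything:

```python
from dataclasses import dataclass, field

# Hypothetical shapes for the two records the engine receives; the field
# names and the lock model are assumptions for illustration.
@dataclass
class Record:
    attributes: dict                           # Attribute name -> value
    locks: dict = field(default_factory=dict)  # Attribute name -> locking user

def evaluate(attribute, current, incoming, incoming_user):
    """Stand-in for the engine's per-Attribute decision: reject when the
    Attribute is locked by someone other than the submitter."""
    lock_owner = current.locks.get(attribute)
    if lock_owner is not None and lock_owner != incoming_user:
        return ("REJECT", f"locked by {lock_owner}")
    return ("ACCEPT", None)

def clearance_report(current, incoming, incoming_user):
    """The 'precog' preview: run the engine over every incoming Attribute
    without persisting anything, and report each Attribute's fate."""
    return {
        attr: evaluate(attr, current, incoming, incoming_user)
        for attr in incoming.attributes
    }

# Sue learns up front that Bob's lock will block her price update,
# while her other changes would go through.
current = Record(attributes={"Price": 10.0}, locks={"Price": "bob.smith"})
incoming = Record(attributes={"Price": 12.5, "Quantity": 3})
report = clearance_report(current, incoming, "sue.doe")
```

Because the preview and the real save path share the same engine, the report can never drift out of sync with the rules actually enforced at persistence time.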
At the start of this series, it was my intent to demonstrate the feasibility of creating a permissions subsystem using MDD. Hopefully, at this point, I have at least shown that. With some ingenuity, we could likely repurpose this subsystem for still other uses. Better still, we can clone this subsystem and adapt it as a solution for other similar dilemmas. Doing so would fulfill my youthful dream of applying such a method to a filesystem, and I still think that such a strategy is possible. However, I will leave the onus of such a chore to a younger version of myself with more spare time…or, better yet, to posterity.