One of the most essential parts of securing access to data, information, and computing resources in an organization is having a security policy. A computer security policy consists of a clearly defined and precise set of rules for determining authorization as a basis for making access control decisions. A security policy captures the security requirements of an establishment, or describes the steps that must be taken to achieve the desired level of security.
A security policy is typically stated in terms of subjects and objects: given a subject and an object, there must be a set of rules by which the system determines whether that subject can be given access to that object.
A security model is a formal or informal way of capturing such policies. Security models are an important concept in the design of a system; the implementation of the system is then based on the chosen security model.
In particular, security models are used to enforce a chosen security policy.
We assume that some access control policy dictates whether a given user can access a particular object. We also assume that this policy is established outside any model. That is, a policy decision determines whether a specific user should have access to a specific object; the model is only a mechanism that enforces that policy. Thus, we begin studying models by considering simple ways to control access by one user.
In this paper, we briefly explain two well-known security models that have been used in securing systems: Biba and Bell-LaPadula. Both models have been used widely, and it is essential for us as security technology students to understand them and to apply them in future systems. We hope that this paper helps students understand the security policies implemented by the Biba and Bell-LaPadula models.
The Biba integrity model was published in 1977 at the Mitre Corporation, one year after the Bell-LaPadula model (Cohen). As stated before, the Bell-LaPadula model guarantees confidentiality of data but not its integrity. As a result, Biba created a model to address the enforcement of integrity in a computer system. The Biba model proposes a group of integrity policies that can be used; it is actually a family of different integrity policies, each of which uses different conditions to ensure information integrity (Castano). The Biba model uses both discretionary and nondiscretionary policies, and it uses labels to assign integrity levels to subjects and objects. Data marked with a high integrity level is considered more accurate and reliable than data labeled with a low integrity level, and the integrity levels are used to prohibit the improper modification of data.
The Biba model consists of a group of access modes. The access modes are similar to those used in other models, although different terms may be used to define them. The access modes that appear in the Biba model's policy rules below are observe, modify, and invoke.
The Biba model can be divided into two types of policies, those that are mandatory and those that are discretionary.
The strict integrity policy, the mandatory policy of the Biba model, consists of three rules:
§ Simple Integrity Condition: s ∈ S can observe o ∈ O if and only if i(s) ≤ i(o).
§ Integrity Star Property: s ∈ S can modify o ∈ O if and only if i(o) ≤ i(s).
§ Invocation Property: s₁ ∈ S can invoke s₂ ∈ S if and only if i(s₂) ≤ i(s₁).
The first part of the policy is known as the simple integrity property. The property states that a subject may observe an object only if the integrity level of the subject is less than or equal to the integrity level of the object. The second rule of the strict integrity policy is the integrity star property. This property states that a subject can write to an object only if the object's integrity level is less than or equal to the subject's level; this rule prevents a subject from writing to a more trusted object. The last rule is the invocation property, which states that a subject s₁ can invoke another subject s₂ only if s₂ has an integrity level at or below that of s₁.
The strict integrity policy enforces "no write up" and "no read down" on the data in the system: a subject is only allowed to modify data at its own level or a lower level. The "no write up" rule is essential because it limits the damage that can be done by malicious subjects in the system. On the other hand, "no read down" prevents a trusted subject from being contaminated by a less trusted object. However, the strict integrity property restricts the reading of lower-level objects, which may be too restrictive in some cases. To combat this problem, Biba devised a number of dynamic integrity policies that allow trusted subjects to access untrusted objects or subjects. He formulated these as a number of different low-watermark policies.
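The three rules above can be sketched as simple predicates. This is a minimal illustration only; the integer integrity levels and function names are our own assumptions, not part of the model's formal presentation.

```python
# Sketch of the Biba strict integrity policy. Integrity levels are
# modeled as integers, where a higher number means higher integrity.

def can_observe(i_s: int, i_o: int) -> bool:
    # Simple Integrity Condition ("no read down"): i(s) <= i(o).
    return i_s <= i_o

def can_modify(i_s: int, i_o: int) -> bool:
    # Integrity Star Property ("no write up"): i(o) <= i(s).
    return i_o <= i_s

def can_invoke(i_s1: int, i_s2: int) -> bool:
    # Invocation Property: s1 may invoke s2 only if i(s2) <= i(s1).
    return i_s2 <= i_s1
```

For example, a subject at integrity level 1 may observe a level-3 object but may not modify it, exactly as "no write up" requires.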
The low-watermark policy for subjects states:
§ Integrity Star Property: s ∈ S can modify o ∈ O if and only if i(o) ≤ i(s).
§ If s ∈ S observes o ∈ O, then i′(s) = min(i(s), i(o)), where i′(s) is the subject's integrity level after the read.
§ Invocation Property: s₁ ∈ S can invoke s₂ ∈ S if and only if i(s₂) ≤ i(s₁).
The low-watermark policy for subjects is a dynamic policy because it lowers the integrity level of a subject based on the objects it observes. This policy is not without its problems. One problem is that if a subject observes a lower-integrity object, the subject's integrity level drops. Then, if the subject needs to legitimately access another object, it may not be able to do so because its integrity level has been lowered. Depending on the order of the subject's read requests, a denial of service could develop.
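The dynamic behavior described above can be sketched as follows. The class and integer levels are hypothetical; the model itself does not prescribe a data structure.

```python
# Sketch of the low-watermark policy for subjects: observing an object
# lowers the subject's integrity level to min(i(s), i(o)).

class Subject:
    def __init__(self, level: int):
        self.level = level

    def observe(self, object_level: int) -> None:
        # Dynamic rule: i'(s) = min(i(s), i(o)) after the read.
        self.level = min(self.level, object_level)

    def can_modify(self, object_level: int) -> bool:
        # The Integrity Star Property still applies: i(o) <= i(s).
        return object_level <= self.level

s = Subject(level=3)
assert s.can_modify(3)      # initially allowed to write at its own level
s.observe(1)                # reading a low-integrity object drops s to 1
assert s.level == 1
assert not s.can_modify(3)  # s can no longer write to level-3 objects
```

The last assertion shows how legitimate accesses can become impossible after the level drop, which is the denial-of-service concern raised above.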
The low-watermark policy for objects is similar to the low-watermark policy for subjects. The policy states:
§ s ∈ S can modify any o ∈ O, regardless of integrity level.
§ If s ∈ S modifies o ∈ O, then i′(o) = min(i(s), i(o)), where i′(o) is the object's integrity level after it is modified.
This policy allows any subject to modify any object. The object's integrity level is then lowered if the subject's integrity level is less than the object's. This policy is also dynamic because the integrity levels of the objects in the system change based on which subjects modify them. The policy does nothing to prevent an untrusted subject from modifying a trusted object.
The policy provides no real protection in a system; it merely lowers the trust placed in the objects. If a malicious program were inserted into the computer system, it could modify any object in the system, and the result would be to lower the integrity level of the infected objects. It is possible with this policy that, over time, there will be no more trusted objects in the system, because their integrity levels have been lowered by the subjects modifying them.
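The gradual erosion of trust described above can be sketched as follows, using hypothetical integer integrity levels.

```python
# Sketch of the low-watermark policy for objects: any subject may
# modify any object, and the object's level drops to min(i(s), i(o)).

def modify(i_s: int, i_o: int) -> int:
    """Return the object's integrity level i'(o) after the write."""
    return min(i_s, i_o)

# Repeated writes by lower-integrity subjects steadily erode trust:
level = 3                       # the object starts fully trusted
for writer_level in (3, 2, 0):  # a malicious level-0 writer appears last
    level = modify(writer_level, level)
assert level == 0               # the object is no longer trusted at all
```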
The low-watermark integrity audit policy states:
§ s ∈ S can modify any o ∈ O, regardless of integrity levels.
§ If a subject modifies a higher-level object, the transaction is recorded in an audit log.
The low-watermark integrity audit policy simply records that an improper modification has taken place. The audit log must then be examined to determine the cause of the improper modification. The drawback to this policy is that it does nothing to prevent improper modifications of objects from occurring.
The ring policy states:
§ Any subject can observe any object, regardless of integrity levels.
§ Integrity Star Property: s ∈ S can modify o ∈ O if and only if i(o) ≤ i(s).
§ Invocation Property: s₁ ∈ S can invoke s₂ ∈ S if and only if i(s₂) ≤ i(s₁).
The ring policy is not perfect; it allows improper modifications to take place. A subject can read a lower-level object and then write the observed data into an object at its own integrity level (Castano).
Even so, the ring policy is no harder to implement than the strict integrity policy, and if the strict integrity property is too restrictive, one of the dynamic policies could be used in its place.
The Biba model has several limitations:
Ø The model does nothing to enforce confidentiality.
Ø The Biba model does not support the granting and revocation of authorization.
Ø Selecting the right policy to implement from the family of policies can be difficult.
The Bell-LaPadula model is a classical model used to define access control. The model is based on a military-style classification system (Bishop), where the sole goal is to prevent information from being leaked to those who are not privileged to access it. The Bell-LaPadula model was developed at the Mitre Corporation, a government-funded organization, in the 1970s (Cohen). Bell-LaPadula is an information flow security model, because it prevents information from flowing from a higher security level to a lower security level.
The Bell-LaPadula model is based around two main rules: the simple security property and the star property. The simple security property states that a subject can read an object if the object's classification is less than or equal to the subject's clearance level; it prevents subjects from reading more privileged data. The star property states that a subject can write to an object if the subject's clearance level is less than or equal to the object's classification level. What the star property essentially does is prevent the lowering of the classification level of an object.
The properties of the Bell-LaPadula model are commonly referred to as "no read up" and "no write down", respectively. The Bell-LaPadula model is not flawless. Specifically, the model does not deal with the integrity of data: it is possible for a lower-level subject to write to a higher-classified object. Because of these shortcomings, the Biba model was created; the Biba model is in turn deeply rooted in the Bell-LaPadula model.
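The two rules can be sketched as simple predicates. As before, the integer levels and function names are illustrative assumptions, not part of the model's formal statement.

```python
# Sketch of the two Bell-LaPadula rules, with clearances and
# classifications as integers (higher means more sensitive).

def can_read(clearance: int, classification: int) -> bool:
    # Simple security property ("no read up").
    return classification <= clearance

def can_write(clearance: int, classification: int) -> bool:
    # Star property ("no write down").
    return clearance <= classification
```

For example, a subject cleared at level 2 may read a level-0 object but may not write to it, which is exactly what blocks information from leaking downward.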
We use a slightly embellished Mealy-type automaton as our model for computer systems. That is, a system (or machine) M is composed of
§ a set S of states, with an initial state s₀ ∈ S,
§ a set U of users (or subjects in security parlance),
§ a set C of commands (or operations), and
§ a set O of outputs,
together with the functions next and out:
§ next: S × U × C → S
§ out: S × U × C → O
Pairs of the form (u, c) ∈ U × C are called actions. We derive a function next*:
§ next*: S × (U × C)* → S
(the natural extension of next to sequences of actions) by the equations
§ next*(s, Λ) = s, and
§ next*(s, α ◦ (u, c)) = next(next*(s, α), u, c),
where Λ denotes the empty string and ◦ denotes string concatenation.
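The two defining equations of next* amount to folding next over an action sequence, which can be sketched as follows. The toy state machine is an assumption for illustration only.

```python
# Sketch of the derived function next*, folding the transition
# function `next` over a sequence of (user, command) actions.

from typing import Callable, Iterable, Tuple, TypeVar

S = TypeVar("S")

def next_star(next_fn: Callable, s: S, alpha: Iterable[Tuple]) -> S:
    """next*(s, Lambda) = s; next*(s, alpha . (u, c)) = next(next*(s, alpha), u, c)."""
    for (u, c) in alpha:          # the empty sequence leaves the state unchanged
        s = next_fn(s, u, c)
    return s

# Toy system: states are integers and the command "inc" increments.
def toy_next(s: int, u: str, c: str) -> int:
    return s + 1 if c == "inc" else s

assert next_star(toy_next, 0, []) == 0                      # next*(s, Λ) = s
assert next_star(toy_next, 0, [("alice", "inc")] * 3) == 3
```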
Based on these two primitive types of access, four more elaborate ones can be constructed. These are known as w, r, a, and e access, respectively:
In order to model this internal structure of the system state formally, we introduce a set N of objects, a set V of values, and the set A of access types, and also the functions contents and current-access-set:
§ contents: S × N → V
§ current-access-set: S → P(U × N × A)
(where P denotes power set) with the interpretation that contents(s, n) returns the value of object n in state s, while current-access-set(s) returns the set of all triples (u, n, x) such that subject u has access type x to object n in state s. Observe that contents captures the idea of the value state, while current-access-set embodies the protection state of the system.
Thus, we introduce functions alter and observe:
§ observe: S → P(U × N)
§ alter: S → P(U × N)
with the definitions:
§ observe(s) = {(u, n) : (u, n, r) ∈ current-access-set(s) or (u, n, w) ∈ current-access-set(s)}
§ alter(s) = {(u, n) : (u, n, a) ∈ current-access-set(s) or (u, n, w) ∈ current-access-set(s)}
That is, observe(s) returns the set of all subject-object pairs (u, n) for which subject u has observation rights to object n in state s, while alter (s) returns the set of all pairs for which subject u has alteration rights to object n in state s.
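Under the reading that r and w access carry observation rights while a and w carry alteration rights, observe and alter can be sketched as follows. Encoding the protection state as a Python set of triples is our own assumption.

```python
# Sketch of observe(s) and alter(s) derived from current-access-set(s),
# assuming the protection state is a set of (subject, object, access)
# triples with access modes "e", "r", "a", and "w".

def observe(current_access_set):
    # Observation rights come with r and w access.
    return {(u, n) for (u, n, x) in current_access_set if x in ("r", "w")}

def alter(current_access_set):
    # Alteration rights come with a and w access.
    return {(u, n) for (u, n, x) in current_access_set if x in ("a", "w")}

cas = {("u1", "n1", "r"), ("u1", "n2", "w"), ("u2", "n1", "a")}
assert observe(cas) == {("u1", "n1"), ("u1", "n2")}
assert alter(cas) == {("u1", "n2"), ("u2", "n1")}
```

Note that w access, granting both rights, places its pair in both sets, while e access would appear in neither.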
A state s ∈ S satisfies the simple security property (ss-property) if, for all subjects u ∈ U and objects n ∈ N:
Ø (u, n) ∈ observe(s) ⊃ clearance(u) ≥ classification(s, n).
A rule r is ss-property-preserving if next(s, u, r) satisfies the ss-property whenever s does.
Let T ⊆ U denote the set of trusted subjects. A state s ∈ S satisfies the *-property if, for all untrusted subjects u ∈ U \ T (we use \ to denote set difference) and objects n ∈ N:
Ø (u, n) ∈ alter(s) ⊃ classification(s, n) ≥ current-level(s, u), and
Ø (u, n) ∈ observe(s) ⊃ current-level(s, u) ≥ classification(s, n).
A rule r is *-property-preserving if next(s, u, r) satisfies the *-property whenever s does.
Note that it follows from these definitions that, for a state s satisfying the *-property and an untrusted subject u:
Ø (u, n, a) ∈ current-access-set(s) ⊃ classification(s, n) ≥ current-level(s, u),
Ø (u, n, r) ∈ current-access-set(s) ⊃ current-level(s, u) ≥ classification(s, n),
and
Ø (u, n, w) ∈ current-access-set(s) ⊃ classification(s, n) = current-level(s, u).
Also, as a simple consequence of the transitivity of ≥, if a state s satisfies the *-property and u is an un-trusted subject with alteration rights to object n1 and observation rights to object n2 (in state s), then
Ø classification(s, n1) ≥ classification(s, n2).
The original formulation of the *-property was somewhat different from that given above, in that it did not employ the notion of a subject's current-level. The formulation of the *-property given in [1, Volume II] is, for all u ∈ U \ T and m, n ∈ N:
Ø (u, m) ∈ observe(s) ∧ (u, n) ∈ alter(s) ⊃ classification(s, n) ≥ classification(s, m).
A state is secure if it satisfies both the simple security property and the *-property. A rule r is security-preserving if next(s, u, r) is secure whenever s is.
We say that a state s is reachable if
Ø s = next*(s₀, α) for some action sequence α ∈ (U × C)*.
A system satisfies the simple security property if every reachable state satisfies the simple security property.
A system satisfies the *-property if every reachable state satisfies the *-property.
A system is secure if every reachable state is secure.
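The secure-state definition above combines the ss-property and the *-property, and can be sketched as a check over a single state. The dictionary encodings of clearance, current-level, and classification are illustrative assumptions.

```python
# Sketch of a secure-state check: a state is secure when it satisfies
# both the simple security property and the *-property.

def satisfies_ss(observe_pairs, clearance, classification) -> bool:
    # ss-property: (u, n) in observe(s) implies clearance(u) >= classification(s, n).
    return all(clearance[u] >= classification[n] for (u, n) in observe_pairs)

def satisfies_star(observe_pairs, alter_pairs, current_level,
                   classification, trusted) -> bool:
    # *-property, checked only for untrusted subjects u in U \ T.
    return (all(classification[n] >= current_level[u]
                for (u, n) in alter_pairs if u not in trusted)
            and all(current_level[u] >= classification[n]
                    for (u, n) in observe_pairs if u not in trusted))

def secure(observe_pairs, alter_pairs, clearance, current_level,
           classification, trusted) -> bool:
    return (satisfies_ss(observe_pairs, clearance, classification)
            and satisfies_star(observe_pairs, alter_pairs, current_level,
                               classification, trusted))
```

A system-level check would then apply `secure` to every reachable state, per the definition above.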
Bell and La Padula demonstrated the application of their security model by using the results of the previous section to establish the security of a representative class of 11 rules. These rules were chosen to model those found in the Multics system.
A subject u may call the rule get-read(n) in order to acquire read access to the object n. The rule checks that the following conditions are satisfied.
o clearance(u) ≥ classification(s, n), and
o current-level(s, u) ≥ classification(s, n).
If both these conditions are satisfied, the rule modifies the protection state by setting
§ current-access-set(s′) = current-access-set(s) ∪ {(u, n, r)},
where s′ denotes the new system state following execution of the rule. Otherwise, the system state is not modified.
The security of get-read follows directly from Corollary 9.
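The get-read rule can be sketched as a function from the old protection state to the new one. The data representations (dictionaries for levels, a set of triples for the protection state) are our own assumptions.

```python
# Sketch of the get-read rule: grant (u, n, "r") only if both the
# clearance check and the current-level check succeed; otherwise the
# protection state is returned unchanged.

def get_read(access_set, u, n, clearance, current_level, classification):
    if (clearance[u] >= classification[n]
            and current_level[u] >= classification[n]):
        return access_set | {(u, n, "r")}    # new protection state
    return access_set                        # request denied: state unchanged

cas = frozenset()
clearance = {"u": 2}
current_level = {"u": 2}
classification = {"lo": 1, "hi": 3}
assert ("u", "lo", "r") in get_read(cas, "u", "lo", clearance,
                                    current_level, classification)
assert get_read(cas, "u", "hi", clearance,
                current_level, classification) == cas   # "no read up"
```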
These are analogous to get-read.
A subject u may call the rule release-read(n) in order to release its read access right to the object n. No checks are made by the rule, which simply modifies the protection state by setting
§ current-access-set(s′) = current-access-set(s) \ {(u, n, r)},
where s′ denotes the new system state following execution of the rule. The security of release-read follows directly from Theorem 10.
These are analogous to release-read.
A subject u may call Change-Subject-Current-Security-Level(l) in order to request that its current-level be changed to l. The rule checks that the following conditions are satisfied for all objects n ∈ N.
§ (u, n) ∈ alter(s) ⊃ classification(s, n) ≥ l, and
§ (u, n) ∈ observe(s) ⊃ l ≥ classification(s, n).
If both these conditions are satisfied, the rule modifies the system state by setting current-level(s′, u) = l, where s′ denotes the new system state following execution of the rule. Otherwise, the system state is not modified.
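The level-change rule can be sketched as follows; the new level is accepted only if it keeps the *-property for every object the subject currently alters or observes. The dictionary and set encodings are illustrative assumptions.

```python
# Sketch of Change-Subject-Current-Security-Level: accept the new
# level l only if it preserves the *-property for u's current accesses.

def change_current_level(current_level, u, l, alter_pairs, observe_pairs,
                         classification):
    ok_alter = all(classification[n] >= l
                   for (s, n) in alter_pairs if s == u)
    ok_observe = all(l >= classification[n]
                     for (s, n) in observe_pairs if s == u)
    if ok_alter and ok_observe:
        new_level = dict(current_level)   # new state; old one untouched
        new_level[u] = l
        return new_level
    return current_level                  # request denied: state unchanged
```

For instance, a subject observing a level-1 object cannot drop its current-level to 0, since that would violate the *-property's observation condition.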
A subject u may call Change-Object-Security-Level(n, l) in order to request that the classification of object n be changed to l. The rule checks that the following conditions are satisfied.
o l ≥ classification(s, n) whenever u is an untrusted subject (i.e., untrusted subjects may not "downgrade" the classification of an object).
If these conditions are satisfied, the rule modifies the system state by setting classification(s′, n) = l, where s′ denotes the new system state following execution of the rule. Otherwise, the system state is not modified.
There are several limitations of BLP; most notably, as discussed above, the model does not address the integrity of data.