Relational database
- consists of tables (relations)
- tables consist of records
- consistency constraints restrict the values records may assume. example: a constraint in an accounts database that the balance field must not be negative
- Database transaction
- a sequence of read/write operations on the database that together achieve a function of an application
- transforms a database from 1 consistent state to another consistent state. example: database had 14 records -> add 1 record -> database has 15 records
- transactions are executed in an interleaved/concurrent manner
- improves throughput
- goal is to maximize the number of transactions completed per unit time
- transactions are scheduled in a sequence that ensures database consistency
- concurrent transaction execution can at times impact the integrity of the database. some examples of such impacts
- lost updates – 2 transactions update the same data item at the same time; the later write overwrites the earlier one, so the first update is lost
- dirty reads – the first transaction reads data that is still being updated by a second transaction which eventually rolls back
- non-repeatable reads – a transaction reads the same field twice but gets different data because another transaction updated the field in between
- phantom reads – a transaction gets different results for the same query because another transaction committed inserts/deletes in between
- there are concurrency control protocols which ensure integrity of database is not violated
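The lost-update anomaly above can be reproduced with a minimal sketch (the `read`/`write` helpers and the interleaving order are illustrative assumptions, not from any library):

```python
# Sketch: two interleaved "transactions" lose an update because both
# read the balance before either writes its result back.

balance = 100  # shared data item

def read():
    return balance

def write(v):
    global balance
    balance = v

# T1 and T2 each try to add 50; interleaving is read/read/write/write:
t1_local = read()          # T1 reads 100
t2_local = read()          # T2 reads 100 (before T1 writes)
write(t1_local + 50)       # T1 writes 150
write(t2_local + 50)       # T2 writes 150 -- T1's update is lost

print(balance)             # 150, not the expected 200
```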
Concurrency Control protocol
- Integrity of the data is maintained by requiring the transactions to satisfy the ACID properties
- Atomic
- either all or none of a transaction's operations are performed
- all sub-transactions of a transaction are treated as one unit
- how is it maintained
- if a transaction fails it is rolled back to ensure atomicity
- however, in a real-time embedded system this can lead to missed deadlines
- cascaded rollback
- a transaction reads a value that was produced by another transaction which later aborts
- a cascaded abort happens when one transaction's abort causes all dependent transactions to abort
- in such a case the transactions need to restart, leading to missed deadlines in an RTES
- Consistency
- transactions maintain integrity constraints -> data is not corrupted
- Isolation
- concurrent transactions do not interfere with each other's computations
- how it is maintained
- achieved by locking or rollback protocols
- locking
- a lock on the shared resource is taken when reading/writing
- Durability
- once committed the change is persisted
- how is it maintained
- once transaction is committed it cannot be aborted
- committed changes are written to stable storage (e.g., via logging) so they survive failures
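The atomicity-by-rollback idea above can be sketched with an undo log (the `db` dict and `transaction` helper are assumed names for illustration, not a real database API):

```python
# Sketch: atomicity via an undo log. Before each write the old value is
# saved; on failure the log is replayed in reverse, rolling the database
# back to its previous consistent state.

db = {"accounts": 14}

def transaction(ops, fail_after=None):
    undo = []
    try:
        for i, (key, new_value) in enumerate(ops):
            if fail_after is not None and i == fail_after:
                raise RuntimeError("transaction failed mid-way")
            undo.append((key, db.get(key)))   # log old value
            db[key] = new_value
        return True                           # commit
    except RuntimeError:
        for key, old in reversed(undo):       # rollback in reverse order
            db[key] = old
        return False

transaction([("accounts", 15)])                          # commits
transaction([("accounts", 99), ("x", 1)], fail_after=1)  # rolls back
print(db["accounts"])                                    # 15
```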
Realtime Database
- a database used by a real-time application is a real-time database
- uses
- many real-time applications need to store and process data
- a controlling system needs to keep its data up to date
- store large volumes of data for retrieval and manipulation
- example applications
- Process control application
- attached to sensors and actuators
- control decisions are made based on input data and controller configuration
- actuator response times are a few milliseconds, so database transactions have to complete within a few milliseconds
- internet service management
- the SMS uses an RTDB to perform authorization, authentication and accounting for internet users
- spacecraft control application
- receives command and control information from ground computers
- maintains contact with ground using antenna, receivers and transmitters
- has redundant hardware and software components increasing volume of data
- network management system
- stores data about the network topology; switches generate data about network traffic and faults
| | conventional database | real time database |
|---|---|---|
| temporal characteristics – data whose validity is lost after a time interval | data updates are not time-critical and do not expire | data represent real-world objects; a value is perishable as it ages. e.g.: in a stock market application, once a new price quotation arrives the previous quotation becomes obsolete |
| timing constraints on transactions | no hard deadlines for queries or updates; late results are acceptable | strict deadlines for queries and updates; missed deadlines can reduce usefulness or be considered system failure |
| performance metric (transaction response time) | number of transactions completed per unit time; used to optimize average response time | number of transactions missing their deadline per unit time |
| queries with deadlines | queries may take variable time; no explicit deadlines | queries and updates are often subject to explicit deadlines to ensure the result is timely and relevant |
| absolute data consistency | transactions ensure data integrity and serializability at any time | data must match the current state of the environment within a prescribed age limit (absolute temporal consistency); e.g.: a temperature read from the database should be close to the current temperature |
| relative data consistency | data consistency is guaranteed globally across all objects | data objects in a set must not differ too much in age from each other (relative temporal consistency); e.g.: if temperature and pressure readings were taken at time 10, values read from the database for both should be close to the readings taken at 10 |
Real time database application design issues
- more complex than conventional database applications
- each database transaction can have extensive data requirements
- if data is stored in secondary storage, disk access can lead to missed deadlines
- rollback can have a cascading effect, introducing unpredictable delays
- transaction response times are tough to predict due to protocols such as concurrency control protocols
How to solve these issues with real time database
- use an in-memory database to solve the above issues
- use main memory to store entire database
- use disk only for backup and logs
- works when RT DB is small
- problems associated with disk storage are avoided
- the set of transactions is simple and known beforehand, so resources can be used effectively to get deterministic transaction execution
Temporal characteristics of real time data
- The main differences between a conventional database and a real-time database are:
- the temporal characteristics of the stored data
- the timing constraints that are imposed on the database operations
- the performance metrics that are meaningful to the transactions of these databases are very different
- example: anti missile system
- the controller must reflect the actual state of the environment (where the missile is)
- this is called temporal consistency
Temporal data
- values of a set of environment parameters are recorded again and again
- temporal attributes must be stored
- archival data i.e. the desired environment state
- a transaction may use both temporal data and archival data
- rocket – current path data is compared with desired path data to determine the extent of error
- temporal consistency
- the actual state of the environment and the database state should be very close
- 2 main requirements
- absolute consistency
- consistency between environment and its reflection in database
- relative consistency
- consistency among data items used to derive new data – e.g.: an environmental parameter like AQI derived from temp, humidity and pm2.5 at a particular point in time
- only contemporary data items are used to derive new data
- how to represent
- D: (value, avi, timestamp)
- value – the measured value
- avi – absolute validity interval: the time interval from the timestamp during which the value is considered valid
- timestamp – the time when the measurement of D took place
- example: d = (120, 5ms, 100ms)
- rvi is used for the relative validity interval
- condition for absolute validity
- current time – d(timestamp) <= avi
- condition for relative validity
- a set R of data items is relatively consistent if
- for all d, for all d' in R
- |d(timestamp) – d'(timestamp)| <= rvi
- see example below


In the above example all entries are absolutely valid, but Vel/Acc are not relatively valid, while Pos/Vel and Pos/Acc are relatively valid.
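The two validity conditions above can be sketched as follows (the `DataItem` class, function names, and the sample numbers are assumptions for illustration):

```python
# Sketch of the d = (value, avi, timestamp) representation and the
# absolute/relative validity checks defined above.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class DataItem:
    value: float
    avi: float        # absolute validity interval
    timestamp: float  # time the measurement was taken

def absolutely_valid(d: DataItem, now: float) -> bool:
    # current time - d(timestamp) <= avi
    return now - d.timestamp <= d.avi

def relatively_valid(items, rvi: float) -> bool:
    # every pair in the relative consistency set R must satisfy
    # |d(timestamp) - d'(timestamp)| <= rvi
    return all(abs(a.timestamp - b.timestamp) <= rvi
               for a, b in combinations(items, 2))

pos = DataItem(value=12.0, avi=10, timestamp=95)
vel = DataItem(value=3.5,  avi=10, timestamp=98)
now = 100
print(absolutely_valid(pos, now))           # True:  100 - 95 <= 10
print(relatively_valid([pos, vel], rvi=2))  # False: |95 - 98| > 2
```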
Concurrency control in RT databases
- transaction involves accessing several data items
- each access to data items takes time especially if disk access is involved
- to improve throughput
- execute transaction as soon as ready
- rather than executing one after other
- so transactions can either be interleaved or concurrent
- concurrent transactions are to be controlled to maintain ACID properties
- concurrent transactions should be serializable (the effect they produce on the data is the same as when the transactions are executed serially)
- 2 main categories of concurrency protocol
- pessimistic
- disallow certain types of transaction from progressing
- so a transaction needs to take permission before performing an operation on the database
- locking schemes are used to grant permission
- optimistic
- all transactions progress without any restrictions and then some transactions are pruned
- so transactions don't need to take permission
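The serializability requirement above can be tested with a conflict (precedence) graph: a schedule is conflict-serializable iff the graph is acyclic. A minimal sketch, where the schedule representation as `(txn, op, item)` tuples is an assumption:

```python
# Two operations conflict if they are from different transactions,
# touch the same item, and at least one is a write ("W"). An edge
# Ti -> Tj means an op of Ti conflicts with a LATER op of Tj.

def conflict_serializable(schedule):
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "W" in (op1, op2):
                edges.add((t1, t2))
    nodes = {t for t, _, _ in schedule}
    def cyclic(n, seen):                 # DFS cycle detection
        if n in seen:
            return True
        return any(cyclic(b, seen | {n}) for a, b in edges if a == n)
    return not any(cyclic(n, set()) for n in nodes)

# fully interleaved writes on x: not serializable
bad  = [("T1","R","x"), ("T2","R","x"), ("T1","W","x"), ("T2","W","x")]
# T1 finishes with x before T2 touches it: serializable
good = [("T1","R","x"), ("T1","W","x"), ("T2","R","x"), ("T2","W","x")]
print(conflict_serializable(bad))   # False
print(conflict_serializable(good))  # True
```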
Concurrency control protocols
- 2 phase locking (2PL)
- pessimistic protocol restricts degree of concurrency
- 2 phases
- growing
- locks are acquired by the transaction
- shrinking
- locks are released by transaction
- once the first lock is released the shrinking phase starts and the transaction cannot take further locks
- limitations for real time applications
- possibility of having priority inversion
- a low priority transaction holds a lock on data for which a high priority transaction has to wait
- long blocking delays
- transactions can experience long, unpredictable blocking delays
- lack of consideration of timing information
- deadlock – see example below
- T1 (high priority) -> lock d1, lock d2
- T2 (low priority) -> lock d2, lock d1
- T2 starts and locks d2
- T1, having higher priority, preempts T2 and starts
- T1 locks d1 and then waits for d2 (held by T2)
- T2 resumes, still holding d2, and waits for d1 (held by T1) -> deadlock
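The growing/shrinking discipline of 2PL can be sketched as follows (the class and method names are assumptions, and blocking is modelled as a simple refusal rather than an actual wait):

```python
# Sketch: a transaction may acquire locks only in its growing phase;
# the first release (here: commit) starts the shrinking phase, after
# which taking a new lock violates the protocol.

class TwoPhaseLockingTxn:
    def __init__(self, name, lock_table):
        self.name = name
        self.lock_table = lock_table   # item -> owning txn name
        self.held = []
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: lock after first release")
        if self.lock_table.get(item) not in (None, self.name):
            return False               # held by another txn: must wait
        self.lock_table[item] = self.name
        self.held.append(item)
        return True

    def commit(self):                  # shrinking phase: release all
        self.shrinking = True
        for item in self.held:
            del self.lock_table[item]
        self.held.clear()

locks = {}
t1 = TwoPhaseLockingTxn("T1", locks)
t2 = TwoPhaseLockingTxn("T2", locks)
print(t1.lock("d1"))   # True
print(t2.lock("d1"))   # False: T1 holds d1, T2 would block
t1.commit()
print(t2.lock("d1"))   # True after T1 releases
```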
- Strict 2PL
- pessimistic protocol restricts degree of concurrency
- implemented in most commercial databases
- on top of 2PL it adds further restrictions
- a transaction cannot release any lock until it terminates (commits or aborts)
- 2PL WP (Wait promote)
- pessimistic protocol restricts degree of concurrency
- deploys priority inheritance scheme where
- if a lower priority transaction (T1) is holding a lock that a higher priority transaction (T2) wants
- then T2 waits and T1 inherits T2's higher priority
- This protocol becomes a standard 2PL protocol if all transactions have the same priority
- 2PL HP (high priority)
- pessimistic protocol restricts degree of concurrency
- when high priority transaction requests lock held by low priority transaction the low priority transaction is aborted releasing the lock
- Pros: chances of deadline miss by high priority transaction are lower compared to 2PL
- Cons: when a lower priority transaction is aborted, its work is wasted
- Priority Ceiling Protocol
- pessimistic protocol restricts degree of concurrency
- There is no priority inheritance
- All transactions are given a priority
- This protocol gives 3 values to every data object
- Read ceiling
- priority value of the highest priority transaction that may write to the data object
- Absolute ceiling
- priority value of the highest priority transaction that may read or write the data object
- Read write ceiling
- the value is defined dynamically at runtime
- when a transaction writes to a data object the read write ceiling is set equal to the absolute ceiling
- when a transaction reads a data object the read write ceiling is set equal to read ceiling
- Rule:
- A transaction requesting access to a data object is granted access if its priority is higher than the read write ceilings of all data objects currently accessed by other transactions
- After a transaction writes a data item, no other transaction is permitted to read or write that data item until the writer terminates
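The three ceilings above can be sketched as follows (the function names and the numeric convention that a higher number means higher priority are assumptions):

```python
# Sketch: the static read and absolute ceilings of a data object,
# computed from which transactions may read/write it, plus the
# dynamically set read-write ceiling.

def ceilings(readers, writers):
    # readers/writers: priority values of transactions that may
    # read / write this data object
    read_ceiling = max(writers)               # highest-priority writer
    absolute_ceiling = max(readers + writers) # highest reader or writer
    return read_ceiling, absolute_ceiling

def rw_ceiling(read_ceiling, absolute_ceiling, last_op):
    # set at run time: absolute ceiling after a write,
    # read ceiling after a read
    return absolute_ceiling if last_op == "write" else read_ceiling

rc, ac = ceilings(readers=[5, 3], writers=[4, 2])
print(rc, ac)                       # 4 5
print(rw_ceiling(rc, ac, "read"))   # 4
print(rw_ceiling(rc, ac, "write"))  # 5
```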
- Optimistic Concurrency Protocols
- they do not prevent any transaction from accessing any data
- transactions are validated at commit time and aborted in case of conflict
- under heavy load a large number of transactions may fail validation and be aborted
- this leads to reduced throughput and more deadline misses
- so the performance of OCC protocols drops under heavy load conditions
- types of OCC
- forward OCC
- transactions read/write freely by making copies of the data
- at commit time the transaction needs to pass a validation test
- the test checks whether there is a conflict between the validating transaction and already committed transactions
- the serialization order of OCC is the order of transaction commits
- OCC Broadcast commit
- when a transaction is ready to commit it notifies its intention to all other running transactions (already committed transactions are ignored)
- each running transaction carries out a validity check; in case of conflict, the conflicting running transaction aborts and restarts
- OCC BC performs better than OCC Forward
- it does not consider priority of transactions
- OCC sacrifice
- explicitly considers the priority of transactions
- follows concept of conflict set
- conflict set is set of transactions that are conflicting with validating transaction
- if any transaction in conflict set is higher priority to validating transaction then validating transaction is aborted and restarted
- if validating transaction has higher priority then conflict set is aborted and restarted
- a transaction may be sacrificed for another transaction that is itself sacrificed later, leading to wasted computation
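The conflict-set logic of OCC sacrifice above can be sketched as follows (the data representation, function names, and sample priorities are assumptions):

```python
# Sketch: the validating transaction's conflict set is every active
# transaction whose read set overlaps its write set. Under
# OCC-sacrifice, priorities decide who aborts and restarts.

def conflict_set(validating, active):
    # active: list of (name, priority, read_set) for running txns
    return [t for t in active if t[2] & validating["write_set"]]

def occ_sacrifice(validating, active):
    conflicts = conflict_set(validating, active)
    if any(prio > validating["priority"] for _, prio, _ in conflicts):
        return "abort validating"   # sacrificed for higher priority
    return "abort conflict set" if conflicts else "commit"

v = {"name": "T1", "priority": 5, "write_set": {"x"}}
active = [("T2", 3, {"x", "y"}), ("T3", 2, {"z"})]
print(occ_sacrifice(v, active))     # T1 outranks T2: conflict set aborts

active_hi = [("T4", 9, {"x"})]
print(occ_sacrifice(v, active_hi))  # T4 outranks T1: T1 is sacrificed
```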
- Speculative Concurrency control
- Speculative concurrency control (SCC) extends the ideas of optimistic concurrency control (OCC) but adds redundant, parallel “what‑if” executions to improve deadline satisfaction in real‑time databases
- Whenever a conflict is detected, a new version of the conflicting transaction is initiated, called the shadow version
- The primary version executes just as any transaction would execute under an OCC protocol
- The shadow version executes as any transaction would under a pessimistic protocol, subject to locking and restarts
- When the primary version commits, any shadows associated with it are discarded
- When the primary version aborts, a shadow takes over and continues under OCC as the new primary
- Note: In SCC protocol all the updates made by a transaction are made on local copies
| Locking based | Optimistic control |
|---|---|
| reduce the degree of concurrent execution of transactions in order to produce serializable schedules | attempt to increase parallelism to its maximum but prune some transactions to satisfy serializability |
| under 2PL-HP a transaction can be restarted by, or wait for, another transaction that is itself aborted later, leading to performance degradation | in broadcast commit schemes only validating transactions can cause restarts of other transactions |
| blocking; can lead to deadlocks | non-blocking and free from deadlocks |
| perform better as the load (conflict rate) becomes higher | at low conflict rates OCC outperforms the pessimistic protocols |