paper 1


relational databases

Data redundancy is an unnecessary repetition of data. This is avoided in databases because of the risk of inconsistencies between different copies of the same data.

Relational databases are usually made up of several data tables. In relational databases, avoiding data redundancy is largely achieved by data normalisation.

Data integrity - the maintenance of a state of consistency in a data store, such that the data reflects the reality it represents, is as it was intended, and is fit for purpose.

Data corruption - the opposite of data integrity, usually caused by technical failures of hardware, software errors or electrical glitches, operator error, or malpractice.

Data security is keeping data safe using built-in security functions.

Referential integrity is one aspect of data integrity which refers to a state of the database where inconsistent transactions are not possible, for example a foreign key referring to a record that does not exist.

normalisation process

First normal form (1NF):
  • Eliminate duplicate columns from the same table
  • Create a separate table for each group of related data
  • Identify a column, or combination of columns, that identifies each row uniquely (a primary key or composite key)

Second normal form (2NF):
  • Remove any sets of data that occur in multiple rows and transfer them to new tables
  • Create relationships between these new tables and the earlier tables by means of foreign keys

Third normal form (3NF):
  • Remove any columns that are not dependent on the primary key
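The result of normalisation can be sketched with Python's built-in sqlite3 module (the table and column names here are invented for illustration): related data is split into separate tables linked by a foreign key, and the DBMS then enforces referential integrity.

```python
import sqlite3

# Hypothetical example: students and courses split into two tables
# linked by a foreign key, instead of repeating the course title
# in every student row.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
con.execute("""CREATE TABLE course (
    course_id INTEGER PRIMARY KEY,
    title     TEXT NOT NULL)""")
con.execute("""CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    course_id  INTEGER REFERENCES course(course_id))""")
con.execute("INSERT INTO course VALUES (1, 'Computing')")
con.execute("INSERT INTO student VALUES (10, 'Ada', 1)")
# Inserting a student whose course_id has no matching course row
# now fails - referential integrity in action.
```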

 

1 of 8

relational databases continued

ACID rules - to protect the integrity of a database, transactions must conform to this set of rules.

Atomicity - a change in the database is either completely performed or not performed at all. The software must prevent half-finished transactions from being saved.

Consistency - a transaction must take the whole database from one consistent state to another consistent state, e.g. the amount deducted from one bank account must be the same as the amount added to another account.

Isolation - other users or processes cannot access the data whilst the transaction is in progress; the data is locked until a consistent state is reached.

Durability - once a change has been made to the database, the change must not be lost through a subsequent system failure or operator error - ideally the transaction is written to secondary storage immediately.
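Atomicity can be sketched with Python's sqlite3 module (the account names and amounts are invented): the two halves of a transfer either both commit or are both rolled back, so a simulated failure part-way through leaves the database unchanged.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO account VALUES (?, ?)",
                [("alice", 100), ("bob", 50)])
con.commit()

try:
    with con:  # opens a transaction; rolls back if an exception occurs
        con.execute("UPDATE account SET balance = balance - 30 "
                    "WHERE name = 'alice'")
        raise RuntimeError("power cut")  # simulated failure mid-transaction
        # the matching credit to bob is never reached
except RuntimeError:
    pass

# The half-finished deduction was rolled back: both balances are unchanged.
```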

Transaction processing is a type of processing which tries to provide a response to a user within a short time frame. It is not time-critical like a real-time system, and normally features a limited range of operations whose procedure can be planned in advance, e.g. a cash point withdrawal or balance enquiry. The required database functionality can be summarised by the acronym CRUD - create, read, update, delete. A transaction must not allow a database to become damaged; the DBMS maintaining this consistent state is called data integrity.

Queries are used to isolate and display a subset of the data in the database, either as a printed report or a screen display.

A DBMS is the software used to create and maintain a database, including the database structure, queries, views and individual tables.

Advantages of a DBMS:
  • a user interface and outputs
  • setting and maintaining access rights
  • automating backups
  • preserving referential integrity
  • creating and maintaining indexes
  • updating the database

2 of 8

databases

A database is an organised, structured, persistent collection of data.

Databases make processing more efficient, reduce storage requirements and help avoid data duplication and redundancy.

A database is a persistent store because its data survives after the program that uses it has finished processing.

Databases allow data to be retrieved quickly, updated and filtered. Keeping a database up to date generally reduces the inconsistencies that lead to errors.

A field is a single item of data in a database.

A record is a single unit of information in a database, normally made up of fields.

A file is a collection of records on a particular topic.

A serial file is one where records are organised one after another. To locate a particular record it is necessary to start at the beginning of the file and examine each record in turn until the required record is found or the end of the file is reached.

A sequential file is an improvement on a serial file, particularly when the records are arranged in some index order, e.g. record ID or surname.

Primary key – a unique identifier of a record.
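The difference can be sketched in Python (the record data is invented): a serial file must be scanned record by record, while a sequential file sorted on its key can be searched much faster, here with a binary search.

```python
import bisect

# Records are (key, data) pairs; a serial file holds them in arbitrary order.
serial = [("S3", "Khan"), ("S1", "Jones"), ("S2", "Smith")]
sequential = sorted(serial)  # a sequential file is ordered on the key

def serial_find(records, key):
    for rec in records:          # must examine every record in turn
        if rec[0] == key:
            return rec
    return None                  # reached the end of the file: not found

def sequential_find(records, key):
    i = bisect.bisect_left(records, (key,))  # binary search on the sorted file
    if i < len(records) and records[i][0] == key:
        return records[i]
    return None
```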

A transaction is a change in the state of a database, e.g. an addition, amendment or deletion.

A transaction file is a file of events that occur as part of the business of an organisation. Its contents are to a large extent unpredictable, although they are usually in chronological order.

A master file is a principal file held by an organisation that stores basic details about some crucial aspect of the business.

3 of 8

databases continued

Entity – a real world thing that is modelled in a database e.g. student, stock item, shop sale

Relation – a table in a relational database

Foreign key – a field in one table that links to the primary key of another table in a relational database

Tuple – a row in a table, equivalent to a record in a database. No two tuples in a relation can be identical.

A common method of separating fields and records is to insert a marker, usually a comma, to delimit the different fields. A file in this form is often known as a .csv file (comma-separated values).

This is flexible and does not waste as much storage space as a fixed field-length structure. The software advances through the file by counting the markers, or groups of markers.
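Reading comma-delimited records can be sketched with Python's csv module (the record content is invented): the software splits each line into fields at the comma markers.

```python
import csv
import io

# Two variable-length records, fields delimited by commas.
raw = "S1,Jones,Computing\nS2,Smith,Physics\n"

# csv.reader advances through the data, splitting fields at each marker.
records = list(csv.reader(io.StringIO(raw)))
```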

Another method of quickly writing and reading files is called hashing. The key field of a record is transformed in such a way as to generate a disk address, allowing a random-access device, e.g. a disk drive, to go directly to a part of the disk and start working from there.
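A minimal sketch of the idea in Python (the hash function and bucket count are invented, and a real system also needs a strategy for collisions, where two keys hash to the same address):

```python
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]  # stand-in for areas of a disk

def address(key: str) -> int:
    # Simple hash: sum of character codes, modulo the number of buckets.
    return sum(ord(c) for c in key) % NUM_BUCKETS

def store(key, record):
    buckets[address(key)].append((key, record))

def fetch(key):
    # Go straight to the one bucket the key hashes to, rather than
    # scanning the whole file.
    for k, rec in buckets[address(key)]:
        if k == key:
            return rec
    return None
```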

OCR – optical character recognition, e.g. bar codes, number plate recognition, postcodes.

OMR – optical mark recognition, e.g. lottery tickets.

A QR code (abbreviated from Quick Response code) is the trademark for a type of matrix barcode (or two-dimensional barcode), first designed for the automotive industry in Japan, that contains information about the item to which it is attached.

4 of 8

sensors

  • A sensor is a device designed to measure some physical quantity in its environment, for example in a heating system or a security alarm.
  • Once a sensor has taken a reading or measurement, it might send that reading straight back to the computer, or it may store up a set of readings over time and send them back in a batch.
  • The reading the sensor produces is its output; to the computer, that same data is an input.

Open loop system

An open loop system only looks at its input signal in order to decide what to do. It takes no account of the output.

Data such as pressure, light and temperature is analogue. Most sensors take analogue measurements, meaning the output changes smoothly from one value to another. Computers only work with digital data, so an interface box or ADC (analogue-to-digital converter) is needed to convert the analogue data from the sensor into digital data the computer can process.

All computers need digital data in order to process it further. Digital data has only two values: 0 and 1, or off and on.
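What the ADC does can be sketched in Python (the 0-5 V reference voltage and 8-bit depth here are assumptions for illustration): a smoothly varying voltage is quantised to the nearest of a fixed number of digital levels.

```python
def adc(voltage, v_ref=5.0, bits=8):
    """Quantise an analogue voltage to an n-bit digital code."""
    levels = 2 ** bits               # 256 levels for an 8-bit converter
    step = v_ref / (levels - 1)      # smallest voltage change it can resolve
    code = round(voltage / step)
    return max(0, min(levels - 1, code))  # clamp to the valid range

adc(0.0)   # lowest code: 0
adc(5.0)   # highest code: 255
```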

5 of 8

data transmissions

A protocol is a set of rules relating to communication between devices, that governs the transmission of data. 

HTML is Hypertext Markup Language - the standard used for creating web pages. It uses text and tags to control what is displayed on the user's computer. The tags delineate text items and affect how they are displayed.

Simplex transmission is where data can only travel in one direction, for example a radio broadcast, where the radio station sends music or talk to a radio, but radios cannot send anything back to the station. It can also be thought of as a one-way street.

Half-duplex transmission is where data can travel in both directions, however it can only travel in one direction at a time, for example a walkie-talkie. It can also be thought of as a two-way street which is only wide enough for one car.

Full-duplex transmission is where data can travel in both directions at the same time, for example an internet connection, where data can be uploaded and downloaded at the same time. It can also be thought of as a two-way street that is wide enough for several cars to pass, travelling in both directions.

6 of 8

data transmissions continued

  • Error checking - when data is being transmitted, it can become corrupted. Therefore, data needs to be checked at the receiving end, to ensure it is the same as the data that was transmitted, before it is accepted.
  • Parity checking is where an 8-bit byte consists of 7 data bits and 1 parity bit. There are two types of parity, odd parity and even parity; the type used is agreed in advance according to the protocol. With even parity, the parity bit is set so that the total number of 1s in the byte is even; with odd parity, so that the total is odd. For example, with even parity, if the 7 data bits contain an odd number of 1s, the parity bit is set to 1 to make the total even. If the receiving computer counts the wrong number of 1s for the agreed parity, it knows there has been an error and asks for the data to be resent.
  • Echoes are a good way to check small amounts of data for errors. The data is returned to the sender, and the sender confirms it matches what was sent. However, this is slow and hugely inefficient, as the data is sent twice (to the receiver and back to the sender).
  • Checksums - data is sent in blocks of several bytes. The bytes are added up (discarding any carry) and the result is transmitted with the data. When the data arrives, the receiver independently computes its own checksum. If the two checksums match, the data is accepted.
  • Bit rate - the speed at which data is transmitted serially is called the bit rate, measured in bits per second.
  • The rate at which the signal changes is called the baud rate (or symbol rate). If each signal change encodes more than one bit, the bit rate is higher than the baud rate: bit rate = baud rate × bits per symbol.
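The parity and checksum checks above can be sketched in Python (the byte values are invented):

```python
def add_even_parity(data7):
    """Append a parity bit so the 8-bit byte has an even number of 1s."""
    ones = bin(data7).count("1")
    parity = ones % 2                 # 1 only if an extra 1 is needed
    return (data7 << 1) | parity

def parity_ok(byte):
    """Receiver's check: is the total number of 1s even?"""
    return bin(byte).count("1") % 2 == 0

def checksum(block):
    """Add up the bytes in a block, discarding any carry past 8 bits."""
    return sum(block) % 256

sent = [add_even_parity(b) for b in (0b1010101, 0b0000001)]
all(parity_ok(b) for b in sent)   # True: bytes arrive uncorrupted
parity_ok(sent[0] ^ 0b00000100)   # False: a single flipped bit is detected
```

Note that parity only detects an odd number of flipped bits in a byte; two errors cancel out, which is one reason checksums over whole blocks are also used.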
7 of 8

transmissions

Latency is the time delay between the moment the first byte or packet of a communication is sent and when it is received at its destination. Computers use either even or odd parity. In even parity, the total number of on bits in each byte, including the parity (most significant) bit, is an even number. When data is transmitted, the parity bit is set at the transmitting end and checked at the receiving end. If the wrong number of bits is on, this is an indication that an error has occurred.

8 of 8
