SCD - 1.3.3 - Networks

Internet, IPv4 & IPv6

The Internet is the largest WAN in existence: a network of interconnected networks which forms the hardware part, while the World Wide Web is the collection of all the resources accessed via the Internet

The internet is structured into parts:

  • The main part is called the backbone: a set of dedicated connections linking several large networks across the globe
  • Each of these points is then connected to regional networks controlled by ISPs
  • The ISPs then provide access to individual end-users

Each device is given a unique identifier so data can be sent to the correct destination, using either IPv4 (4 octet values, each described by 8 bits) [e.g. 14.132.250.10] or, more lately as IPv4 has run out of its 4.3 billion address limit, IPv6 with 8 sets of 4 hex digits (128 bits) [e.g. 3A4C:B2F2:1256:9ABD:124E:DEFA:111A:457B]
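
The two address formats above can be checked with Python's standard `ipaddress` module; a minimal sketch using the example addresses from the notes:

```python
import ipaddress

# IPv4: four octets of 8 bits each -> 32 bits, ~4.3 billion addresses
v4 = ipaddress.ip_address("14.132.250.10")
# IPv6: eight groups of four hex digits -> 128 bits
v6 = ipaddress.ip_address("3A4C:B2F2:1256:9ABD:124E:DEFA:111A:457B")

print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128
print(2 ** 32)                       # 4294967296 -- the IPv4 address limit
```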

1 of 19

Web addresses + DNS

URLs are used to specify the means of accessing a resource across a network and its location, e.g. http:// specifies that the resource requires HTTP, while www.bbc.co.uk/index.html gives the fully qualified domain name and the resource to be accessed; the URL is a combination of both

DNS servers are dedicated computers with an index of domain names and their IP addresses; when a computer queries a DNS server with a domain name it returns the IP address, which the computer then uses (URLs exist because IP addresses are too difficult to remember)

There are 13 root DNS servers that work together to catalogue every domain name, segmented into geographical groupings or levels. When a DNS server does not know the IP of a domain, the query is referred to a related domain server that may know; this continues until a server can deliver an IP address or an error message is returned (the order of this is from the top-level servers down to lower-level servers)
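
The top-down referral process can be sketched as a toy recursive lookup. Everything here is hypothetical (the server names, the referral tables and the IP address are made up for illustration):

```python
# Each "server" knows either the final answer or which server to ask next,
# working from the root down through lower-level servers.
SERVERS = {
    "root":   {"uk.": "uk-tld"},                   # root refers .uk queries on
    "uk-tld": {"co.uk.": "co-uk"},                 # TLD server refers again
    "co-uk":  {"bbc.co.uk.": "212.58.233.254"},    # lower level has the answer
}

def resolve(name, server="root"):
    table = SERVERS[server]
    for suffix, target in table.items():
        if name.endswith(suffix):
            # if the target is another server, recurse; otherwise it's the IP
            return resolve(name, target) if target in SERVERS else target
    raise LookupError(f"cannot resolve {name}")    # the 'error message' case

print(resolve("bbc.co.uk."))  # 212.58.233.254 (made-up mapping)
```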

2 of 19

DNS continued + Networks

Domain names must be unique so as not to confuse DNS requests, which is the responsibility of 5 global internet registries that work together to maintain a database of address assignments. A domain name can be purchased from internet registrars, with top-level domains (TLDs - .com, .uk, .net, .org) being the most expensive

As soon as 2+ computers are connected together they form a network (1 device on its own is called standalone). Networks fall into 2 categories: LAN (can be split into PAN [Personal Area Network]) or WAN (can be MAN [Metropolitan Area Network] or GAN [Global Area Network])

LAN - 2+ computers connected within a small geographical area (e.g. an office), with the advantages of not running out of IP addresses, being easier to keep private than a WAN, network printing, resource sharing and update issuing

3 of 19

Topologies (Bus + Star)

Topologies (arrangement of various devices which make up a network)

Bus - Nodes connected in a daisy chain along a central communication channel, with all nodes attached to a single backbone and each end fitted with a terminator to stop the signal bouncing back. Each node is passive, data is sent in one direction at a time, and only one computer can transmit successfully at a time

  • Advantages - Cheap, easy to add devices, good for small networks
  • Disadvantages - If the backbone fails the network fails, limited cable length, performance degrades due to data collisions, poor network security

Star - A central node/hub provides a common connection point for all other nodes; a switch sends each communication only to the computer it is intended for

  • Advantages - Easy to isolate problems, good performance, switch is more secure as data is only sent to recipient
  • Disadvantages - Expensive to set up if a lot of cable is needed, if central device fails network fails
4 of 19

Topologies (Full + Partial Mesh)

Mesh - Full (each computer connected to every other computer) or partial (each computer is connected to at least 2 devices). There is no central controlling node, so there is no single point of failure and all devices have equal importance

  • Advantages - Manages high amounts of traffic as multiple devices can transmit simultaneously, failure of one device does not cause a break in the network, adding more devices does not disrupt data transmission
  • Disadvantages - Higher cost to implement, building and maintaining is difficult and time consuming, the chance of redundant connections is high which increases costs and adds potential for reduced efficiency
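
The high implementation cost of a full mesh follows from the number of links needed: every pair of devices gets its own connection, giving n(n-1)/2 cables for n devices. A quick check:

```python
def full_mesh_links(n):
    """Cables in a full mesh of n devices: one per pair, so n*(n-1)/2."""
    return n * (n - 1) // 2

print(full_mesh_links(4))   # 6
print(full_mesh_links(10))  # 45 -- cost grows quickly, hence partial mesh
```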

Physical topology - How devices are connected

Logical topology - How devices communicate across the physical topology

5 of 19

Wireless Components + Wi-Fi

Wi-Fi - Wireless networking technology allowing high-speed internet and network connections, where devices connect via a Wireless Access Point (WAP), with hotspots in places such as cafés, hotels, libraries, etc.

Wireless Components

Wireless Network Interface Card (NIC)

  • Makes up a station (Computer + NIC)
  • Stations share a radio frequency channel
  • The WAP requires a connection to a router, and the router needs a connection to a modem (WAP + modem are often built into the router)

Wireless network requires a WAP – This broadcasts on a fixed frequency and all devices within range can connect

6 of 19

T2 - Internet Communication

Data was originally transmitted by physically connecting two endpoints, which is called circuit switching as it creates a dedicated communication connection. Nowadays packet switching is used

Data is broken into packets at the sending end and reassembled at the receiving end, increasing network efficiency and reliability. The data in these packets is chunked, and each packet has to travel to the destination, which takes time; the delay from the source to the destination is called latency

When packets are sent across networks with multiple connections and routes to a destination, packet switching is used, where each packet takes the fastest route available

Packets are forwarded from one network to another using routing: 

  • Each router stores data about the available routes to the destination node, looking up the destination IP in its routing table to find the best router to forward the packet to
  • This information is constantly updated to ensure it has the fastest route available
  • Every time a packet transfers between routers it is called a hop (the number of hops can be counted)
  • Routers continue to forward the packet until it reaches the destination node, where the packets are put back in order
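
The steps above can be sketched as a toy forwarding loop. All router names, the destination network and the routing tables are made up for illustration:

```python
# Each router's routing table maps a destination network to the next hop.
ROUTING_TABLES = {
    "R1": {"10.0.2.0": "R2"},
    "R2": {"10.0.2.0": "R3"},
    "R3": {"10.0.2.0": "deliver"},  # directly attached: deliver locally
}

def forward(dest_net, start="R1"):
    router, hops = start, 0
    while True:
        next_hop = ROUTING_TABLES[router][dest_net]
        if next_hop == "deliver":
            return hops             # reached the destination network
        router = next_hop
        hops += 1                   # each router-to-router transfer is a hop

print(forward("10.0.2.0"))  # 2 hops: R1 -> R2 -> R3
```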
7 of 19

Building a Packet + Protocol + Gateways

The data to be sent is called the payload, and the packet also contains a header and a trailer (payloads vary in size from 500-1500 bytes)

Packets are kept small to ensure that transmission times are short and do not prevent other packets from moving, but they are not so small as to make data transfer inefficient

Header - Contains the recipient and sender IP addresses, the packet number and the overall number of packets in the transmission to assist in reassembling the data. A Time to Live (TTL) or hop limit is included; once it expires the packet is discarded and will be requested again

Trailer - Contains error-checking components (checksum, cyclic redundancy check) to verify data has not been corrupted in transfer and to let the receiving host check the data. The same checksum is calculated at the receiving end; if the two do not match the data is corrupted, it is refused and a new copy is requested
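
The sender-calculates/receiver-recalculates idea can be shown with a deliberately simple additive checksum (real protocols use stronger checks such as CRCs, but the comparison step is the same):

```python
def checksum(payload: bytes) -> int:
    """Toy additive checksum: sum of the byte values modulo 256."""
    return sum(payload) % 256

# Sender attaches the checksum in the trailer:
packet = {"payload": b"hello", "trailer": checksum(b"hello")}

# Receiving host recalculates and compares:
ok = checksum(packet["payload"]) == packet["trailer"]
print(ok)  # True -- data arrived intact

packet["payload"] = b"hellp"  # simulate corruption in transit
print(checksum(packet["payload"]) == packet["trailer"])  # False -> resend
```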

Protocol - A set of rules/formal description of the format of a digital transmission, covering packet size, header contents & format, error detection, correction procedure, how the checksum is calculated, and trailer contents

Protocols are needed as they must be standard across all devices in a network for communication to work; TCP/IP is the global standard, operating as a stack of 4 layers

Gateways - Required when data is travelling to a network which uses different protocols; the gateway strips the header data and reapplies it in the correct format for the new network. A router and gateway are often one integrated device

8 of 19

TCP/IP

TCP/IP - Set of rules which format a message so it can be sent over a network:

Application Layer - Provides services for applications to communicate across a network (often the internet), uses high-level protocols that set an agreed standard (e.g. SMTP, HTTP), specifies the rules of what should be sent

Transport Layer - Uses TCP to establish an end-to-end connection with the recipient computer; data is split into packets which are numbered sequentially, and the port number to be used is added. At the receiving end this layer confirms packets have been received and requests any missing packets

Network Layer - Uses IP to address packets with the source and destination IPs; a router looks at the IPs and, from the destination IP, uses its routing table to choose the next hop, forwarding each packet towards an endpoint (a socket: the combination of IP address and port number)

Link Layer - Operates across a physical connection, adds the MAC address of the NIC the packets are being sent to, which changes with every hop

Once received, the link layer removes the MAC address, the network layer removes the IP address, the transport layer removes the port number and reassembles the packets into the correct order, and the application layer presents the data
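
The encapsulation down the stack and the stripping back up can be sketched with nested dictionaries standing in for each layer's header (all header values below are illustrative, not real protocol fields):

```python
def send(data):
    app       = {"protocol": "HTTP", "data": data}                  # application
    transport = {"port": 80, "seq": 1, "segment": app}              # transport (TCP)
    network   = {"src_ip": "192.168.0.2", "dst_ip": "10.0.0.5",
                 "packet": transport}                               # network (IP)
    link      = {"dst_mac": "AA:BB:CC:DD:EE:FF", "frame": network}  # link
    return link

def receive(frame):
    network   = frame["frame"]        # link layer strips the MAC address
    transport = network["packet"]     # network layer strips the IPs
    app       = transport["segment"]  # transport layer strips the port/seq
    return app["data"]                # application layer presents the data

print(receive(send("GET /index.html")))  # GET /index.html
```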

9 of 19

MAC addresses, Port numbers, FTP & Mail protocols

MAC addresses uniquely identify a physical device with a network interface card (NIC); packets move up and down the network and link layers of the stack as they hop across routers, changing their source and destination MAC addresses as they go

Port Numbers - Used to alert a specific application to deal with data sent to a computer; they are used by protocols to specify what data is being sent

File Transfer Protocol - Application-level protocol used to move files across a network (usernames and passwords are commonly used to protect access to a file and identify users)

Mail servers are dedicated computers that are responsible for storing email, providing access to clients and providing services to send emails

  • SMTP - Used to send emails and forward them between mail servers to their destination
  • POP3 - Downloads an email stored on a remote server to a local client, after which the server copy is deleted
  • IMAP - Manages emails on a server so multiple clients can access the same email account in synchronicity (Emails are not deleted)
10 of 19

T3 - Network Security and threats

Firewall - Protects information going in and out of a network; implemented in software or hardware, ports are opened so as to let only certain traffic through (e.g. like a door to a castle)

Packet Filtering - The firewall inspects packets by looking at the payload, destination & port, blocking data trying to go through the wrong port. A port must remain open for the duration of a connection, otherwise the firewall rejects the packets; ports are closed by default, otherwise all connections could go through
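
A minimal sketch of the closed-by-default rule check (the open-port set is a hypothetical policy, not a real firewall configuration):

```python
# Ports are closed by default; only explicitly opened ones let traffic through.
OPEN_PORTS = {80, 443}  # e.g. allow web traffic only

def firewall_allows(packet):
    """Accept the packet only if its destination port is open."""
    return packet["port"] in OPEN_PORTS

print(firewall_allows({"port": 443, "payload": "..."}))  # True
print(firewall_allows({"port": 23,  "payload": "..."}))  # False -- blocked
```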

Proxy Servers - Physical hardware put between the client and the web server, so the web server never sees the client's information and the client never sees the server, making it look like the request came from the proxy

A proxy enables: anonymous browsing, bypassing country filters, filtering undesirable content, logging user data and requests, caching commonly accessed sites (slower if the site has updated since the last visit)

Encryption - The act of encoding a plaintext message so it cannot be deciphered unless you have a numerical key to decrypt it; therefore if the message is intercepted it cannot be understood, but if the key is intercepted the encryption process is rendered useless
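
The numerical-key idea can be shown with a toy XOR cipher. This is purely illustrative: applying the same key twice recovers the plaintext, but real systems use far stronger schemes:

```python
def xor_cipher(message: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR each byte with a numerical key.
    XOR is its own inverse, so the same key both encrypts and decrypts."""
    return bytes(b ^ key for b in message)

ciphertext = xor_cipher(b"meet at noon", key=42)
print(ciphertext)                      # unintelligible without the key
print(xor_cipher(ciphertext, key=42))  # b'meet at noon'
```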

11 of 19

Malicious Software & Code Quality

Malicious Software - Disrupts users/damages data with multiple types of attack; worms and viruses self-replicate, and a virus embeds itself in programs and data files but needs a user to help spread it

  • Worms - Standalone programs that do not require a user to run them in order to spread, as they exploit vulnerabilities in the destination system and self-replicate
  • Trojan - Masks itself as an innocuous or useful application; cannot self-replicate, and often serves to open a backdoor to the internet to use processing power and bandwidth and to remotely exploit data, e.g. in a botnet
  • Phishing - Lures unsuspecting victims using services such as email/texts, manipulating the victim into visiting a fake website and giving away personal data; clicking the link can give away bank/social media account details

Code quality needs to be high; along with monitoring unauthorised access attempts, improving protection can reduce the threat from malware, e.g. guarding against SQL injection/buffer overflow attacks, using strong passwords, 2-factor authentication, and use of access rights

12 of 19

Types of attacks & Prevention

Buffer Overflow - Occurs when a program accidentally writes data to a location too small to handle it, so overflowed data may end up in neighbouring instruction space; malware takes advantage of this to deliberately cause an overflow which may then be read as a malicious instruction

SQL Injection - SQL commands can be entered into online database forms to change the processing, e.g. SELECT * FROM Customer WHERE CustID = 21104710; DROP TABLE Accounts;
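
The standard defence is a parameterised query, which treats the whole user input as a value rather than as SQL. A sketch with Python's built-in `sqlite3` and a made-up table matching the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CustID INTEGER, Name TEXT)")
conn.execute("INSERT INTO Customer VALUES (21104710, 'Alice')")

malicious = "21104710; DROP TABLE Accounts;"

# UNSAFE: string concatenation would splice the attacker's SQL into the query:
#   "SELECT * FROM Customer WHERE CustID = 21104710; DROP TABLE Accounts;"

# SAFE: the ? placeholder binds the input as data, never as SQL
rows = conn.execute(
    "SELECT * FROM Customer WHERE CustID = ?", (malicious,)
).fetchall()
print(rows)  # [] -- the injected text matches no CustID and nothing is dropped
```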

Monitoring - Protects against threat of hacking which could introduce malware, tools such as packet sniffers and user access logs can be used to protect against these threats

Prevention - Using up-to-date patches to the OS and application programs reduces vulnerabilities as well as having up-to-date anti-malware/virus software can prevent spread of infection

13 of 19

T6 - Search Engine Indexing and PageRank

Search engines are used to find something across the World Wide Web by looking on every single server/station connected to the internet (over a trillion web pages exist); search engines each have their own efficient way of finding relevant resources, so results will vary

Search Engine Indexing - Web crawlers/spiders are used to index all the pages by looking at a few then following the links on those pages until a large chunk of the internet has been indexed, as a result when you search something you are actually searching the engine's index of the web

  • Web crawlers store information such as URL, content of resource, last time it was updated, quality of the resource
  • When searching for something, to decide on the resources to show, the engine asks over 200 questions (Google) about each page, including its PageRank, which compile to give the page's score

Meta Tags are used to describe the content of a web page; they are created by developers and made so that only crawlers can see them, with tags such as keywords and a description of the page

When searching the web, the search engine looks through its index for every resource that contains those terms, which are then presented to the user; the search engine must know how to calculate the relevant pages and how to list them
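
Searching the index rather than the web can be sketched as a toy inverted index: map each term to the set of pages containing it, then answer queries from that map. The crawled pages below are hypothetical:

```python
PAGES = {  # hypothetical crawled content
    "a.html": "cheap network switches",
    "b.html": "star and bus network topologies",
    "c.html": "baking bread at home",
}

# Build the inverted index: term -> set of pages containing it
index = {}
for url, text in PAGES.items():
    for term in text.split():
        index.setdefault(term, set()).add(url)

def search(*terms):
    """Return the pages containing every search term."""
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

print(sorted(search("network")))          # ['a.html', 'b.html']
print(sorted(search("network", "star")))  # ['b.html']
```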

14 of 19

PageRank

Developed by the founders of Google to list search results in order of usefulness and relevance; before this, pages were ranked in the order of how many times the search term appeared on the web page

  • Does not rank websites as a whole, each page is given its own PageRank
  • PageRank of A is defined by the PageRanks of those pages linked to A
  • Dampening factor is probability of random web browser reaching a page, usually set to 0.85

Importance of page is determined by number of inbound links from other pages as well as the quality (PageRank) of the inbound pages

15 of 19

PageRank Continued

There are around 200 factors that affect the PageRank of a website such as:

  • Domain name - Relevance to search term
  • Frequency of search term in web page
  • Age of web page
  • Frequency of web page updates
  • Magnitude of content updates
  • Keywords in <H1> tags

When calculating the PageRank of a page that we do not know, we assume that the PageRank of the inbound pages is 1; this then changes as we iterate, getting more and more accurate. The higher the value, the more relevant the page is in theory, and so it will appear higher in the list of search results
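
The iteration described above can be sketched on a tiny hypothetical link graph, using the classic formula PR(A) = (1 - d) + d * sum(PR(p) / outlinks(p)) over the pages p linking to A, with damping factor d = 0.85 and every page starting at 1:

```python
LINKS = {  # hypothetical link graph: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
d = 0.85
rank = {page: 1.0 for page in LINKS}  # assume every page starts at 1

for _ in range(50):  # each iteration gets more and more accurate
    new = {}
    for page in LINKS:
        inbound = (rank[p] / len(LINKS[p]) for p in LINKS if page in LINKS[p])
        new[page] = (1 - d) + d * sum(inbound)
    rank = new

# C has the most (and best-quality) inbound links, so it ranks highest
print({p: round(r, 3) for p, r in sorted(rank.items())})
```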

16 of 19

T7 - Client-Server and Peer-to-Peer

Client-Server Model - The client accesses data, services & files from the server; the client initiates communication with the server, and the server waits for requests from clients

Features: Central server manages security and holds some files, the server performs some tasks, suitable for different types of organisations, may need specialist IT staff, can be expensive to set up, no access to other users' files, backup is centralised as are user IDs, passwords and access levels

Peer-to-Peer Model - No central server (Full or partial mesh)

Features: Suitable for a small company/home network, all computers can see files on other computers and communicate without going through a server, if a computer is switched off data cannot be retrieved from it, cheap to set up and maintain (the internet is an example of this)

Client Processing - Data is processed by the client before it is sent to a server; on the web this is usually in the form of scripts, so the web page does not need to communicate with the server. As a result only good data is transmitted, increasing response times and reducing clutter on the server

JavaScript may be used on web pages to validate data before it gets sent to the server to be further validated, e.g. making sure that no box on a form is left blank
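
On a real page this check would be JavaScript, but the logic can be sketched in Python (the field names and rules below are hypothetical): reject the form before it is sent if any required box is blank:

```python
def validate_form(form: dict) -> list:
    """Client-side-style validation: return a list of errors (empty = valid)."""
    errors = []
    for field in ("name", "email"):
        if not form.get(field, "").strip():
            errors.append(f"{field} must not be left blank")
    if "@" not in form.get("email", ""):
        errors.append("email must contain @")
    return errors

print(validate_form({"name": "", "email": "alice@example.com"}))
# ['name must not be left blank']
print(validate_form({"name": "Alice", "email": "alice@example.com"}))  # []
```

The server still re-validates this data on arrival, since client-side checks can be bypassed.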

17 of 19

API & Server-Side Processing

Client processing allows for more interactivity as it responds immediately to user input, removes unnecessary processing from the server, and data cannot be intercepted on its way to the server. However, not all browsers support scripts, scripts are dependent on the client's processing power, and each browser responds to scripts in its own way

Initial processing on the client side saves server resources and speeds up the server providing a better experience as the scripts handle the initial processing.

Application Programming Interface (API) - A set of tools used for building software applications, with requests processed by the client and responded to by the relevant server (e.g. Google Maps); the process is initialised and defined by the client, and once the API is initialised the web page can define how to interact and request data

Server-Side Processing - A server sometimes needs to process information to do tasks such as: process user input as another layer of validation, display pages, structure web applications, interact with permanent storage/databases using SQL

Often programmed in Python, PHP, and ASP

Sometimes needed if the client does not have the capability to provide the data required to process a request, or because a company may want to hold sensitive data relating to the request on their server; additionally, the way the data is processed may be a secret protected by law, e.g. Google PageRank

18 of 19

Server-Side Processing cont. Client vs Server

Server-side processing may also be used to further validate data already validated by the client using JavaScript, which can be easily circumvented; this makes server-side validation crucial to accurate, secure data being transmitted

Example Argos search: 

  • Client processing - Web page behaviour, style, form validation
  • Server processing - Item stock lookup, loading product information from a database, sending the results back to the client

Client vs Server-side processing

Client - Initial validation, web page interactivity, manipulating interface elements, applying CSS, reduces server load, reduces web traffic

Server - Database queries, encoding data to readable HTML, updating the database, calculations, further validation, keeps data secure

19 of 19
