Major Trends Which Affect Microprocessor Information Technology Essay

In the first section I selected the question about the Memory Management Unit of the Linux operating system. In that section I described the strategies and mechanisms used by memory management, the problems faced by these techniques, and the solutions to overcome them.

In section two I chose the question about microprocessors. That section discusses how microprocessors work, the major trends affecting their performance, and the differences between microprocessor design goals for laptops, servers, desktops and embedded systems.


Section 1: Linux Operating System

Introduction

Linux, one of the free open source operating systems, performs sufficient memory management activities to keep the system stable and to meet users' demand for error-free operation. As processes and threads execute, they read instructions from memory and decode them. In doing so, an instruction may fetch or store the contents of a location in memory. The processor then executes the instruction, and in either case memory is accessed, whether to fetch instructions or to store data.

Linux uses a copy-on-write scheme. If two or more programs are using the same block of memory, only one copy is actually in RAM, and all the programs read the same block. If one program writes to that block, then a copy is made for just that program. All other programs still share the same memory.
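
To make the copy-on-write behaviour concrete, here is a minimal C sketch, assuming a POSIX system with fork(): after the fork, parent and child logically have their own copies of the buffer, but the kernel only copies the underlying page when the child actually writes to it.

/* Copy-on-write sketch: after fork(), parent and child initially share the
 * same physical pages.  The child's write below forces the kernel to copy
 * just the written page; the parent's data is unaffected. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *block = malloc(4096);
    if (block == NULL)
        return 1;
    strcpy(block, "original");

    pid_t pid = fork();           /* child gets a copy-on-write view      */
    if (pid == 0) {
        strcpy(block, "changed"); /* write triggers a private page copy   */
        printf("child sees : %s\n", block);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent sees: %s\n", block);  /* still "original" */
    free(block);
    return 0;
}

The output shows that the child's change does not affect the parent, even though the program never made an explicit copy.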

Linux handles memory in such a way that when RAM is not otherwise in use, the operating system uses it as disk cache.



Memory Management

The term memory management refers to one of the most important parts of the operating system: the provision of memory-related services to applications.

These services include virtual memory (use of a hard disk or other non-RAM storage media to provide additional program memory), protected memory (exclusive access to a region of memory by a process), and shared memory (cooperative access to a region of memory by multiple processes).

Linux memory management builds on the Memory Management Unit (MMU), which translates the linear (virtual) addresses used by the system into physical memory addresses; a page fault interrupt is raised when the processor tries to access memory it is not entitled to.

Virtual Memory

Linux virtual memory uses a disk as an extension of RAM, so that the effective size of usable memory grows correspondingly. The kernel writes the contents of a currently dormant block of memory to the hard disk so that the memory can be used for another purpose. When the original contents are needed again, they are read back into memory. This is made completely transparent to the user; programs running under Linux only see the larger amount of memory available and do not notice that parts of them reside on the disk from time to time. Obviously, reading and writing the hard disk is slower (on the order of a thousand times slower) than using real memory, so the programs do not run as fast. The part of the hard disk that is used as virtual memory is called the swap space.
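
As a small illustration of the relationship between RAM and swap space, the following Linux-specific C sketch uses sysinfo(2) to report the sizes of physical memory and swap; their sum approximates the total virtual memory available to the system.

/* Sketch: query physical RAM and swap sizes on Linux with sysinfo(2). */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0)
        return 1;

    unsigned long long ram  = (unsigned long long)si.totalram  * si.mem_unit;
    unsigned long long swap = (unsigned long long)si.totalswap * si.mem_unit;

    printf("RAM       : %llu MiB\n", ram  >> 20);
    printf("Swap      : %llu MiB\n", swap >> 20);
    printf("RAM + swap: %llu MiB\n", (ram + swap) >> 20);
    return 0;
}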

The virtual memory system deals only in virtual addresses, not physical addresses. These virtual addresses are translated into physical addresses by the processor, based on information held in a set of tables maintained by the operating system.

To make this translation easier, virtual and physical memory are divided into conveniently sized pieces called pages. These pages are all the same size; if they were of different sizes, the system would be very hard to administer.
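
The translation itself works page by page: the upper bits of a virtual address select the page (and are looked up in the page tables), while the lower bits are the offset within that page and are carried over unchanged. A minimal sketch, assuming a POSIX system where the page size can be queried with sysconf:

/* Sketch: splitting a virtual address into page number and offset.
 * The page size is queried at run time; 4 KiB is typical on x86. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    int value = 42;
    uintptr_t addr = (uintptr_t)&value;

    uintptr_t page_number = addr / page_size;  /* looked up in page tables  */
    uintptr_t offset      = addr % page_size;  /* kept as-is in the result  */

    printf("page size   : %ld bytes\n", page_size);
    printf("virtual addr %#lx -> page %#lx, offset %#lx\n",
           (unsigned long)addr, (unsigned long)page_number,
           (unsigned long)offset);
    return 0;
}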


The schemes for Memory Management

The simplicity of the Linux memory model facilitates program implementation and portability across different systems. There are two schemes for implementing memory management in Linux:

1. Paging

2. Swapping

Paging

Demand Paging

Physical memory can be saved by loading only the virtual pages that are actually being used by the executing program. When a program is run to query a database, for example, not all of the database needs to be loaded into memory, just the data records being examined. If the query is a search query, there is no point in loading the code that deals with adding new records. This is referred to as demand paging.

Demand paging is used to load executable images into a process's virtual memory. Whenever a command is executed, the file containing it is opened and its contents are mapped into the process's virtual memory. Memory mapping is performed by modifying the data structures that describe the process's memory map; only the first part of the image is actually brought into physical memory, while the rest is left on disk. Linux uses the memory map to identify which parts of the image to load into memory, generating page faults as the image executes.
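
The same mechanism is available to programs through mmap(). The sketch below (POSIX; /bin/ls is just an arbitrary readable file chosen for illustration) sets up a mapping without reading any data; pages are only faulted in from disk when they are first touched.

/* Sketch: memory-mapping a file, much as the loader maps executable images.
 * mmap() only records the mapping; data is read from disk on first access. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/bin/ls", O_RDONLY);   /* any readable file will do */
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) != 0)
        return 1;

    /* No data is loaded yet; only the mapping is set up. */
    unsigned char *image = mmap(NULL, st.st_size, PROT_READ,
                                MAP_PRIVATE, fd, 0);
    if (image == MAP_FAILED)
        return 1;

    /* Touching the first byte faults the first page in from disk. */
    printf("first byte: %#x (file size %lld bytes)\n",
           image[0], (long long)st.st_size);

    munmap(image, st.st_size);
    close(fd);
    return 0;
}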


Page Faults

A page fault exception is generated when a process tries to access a page that is unknown to the memory management unit. The handler examines the currently running process's memory information and the MMU state, then determines whether the fault is "good" or "bad". Good page faults cause the handler to give more memory to the process; bad faults cause the handler to terminate the process. Good page faults are expected behaviour: they occur whenever a program allocates dynamic memory, runs a section of code for the first time, writes to a section of data for the first time, or increases its stack size. When the process tries to access this new memory, the MMU raises a page fault and the system adds a fresh page of memory to the process's page table; the interrupted process is then resumed. Bad faults occur when a process tries to access memory that it does not own, or follows a NULL pointer. They can also be caused by bugs in the kernel, in which case the handler prints an "oops" message before killing the process.
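
Good page faults can be observed from user space. The following C sketch (Linux/POSIX, using getrusage()) allocates a block of memory and counts the minor page faults taken while writing to each page for the first time; each of those faults is the kernel mapping a fresh page into the process on demand.

/* Sketch: observing "good" page faults.  Newly allocated memory is not
 * backed by physical pages until first use; each first write below raises
 * a minor fault that the kernel resolves by handing the process a page. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    long page  = sysconf(_SC_PAGESIZE);
    size_t len = 64 * (size_t)page;
    char *buf  = malloc(len);
    if (buf == NULL)
        return 1;

    long before = minor_faults();
    memset(buf, 0xAB, len);              /* first write to every page */
    long after  = minor_faults();

    printf("minor faults while touching %zu pages: %ld (buf[0]=%u)\n",
           len / page, after - before, (unsigned char)buf[0]);
    free(buf);
    return 0;
}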


Swapping

Linux divides its physical RAM (random access memory) into pieces of memory called pages. Swapping is the process of copying a page of memory to a preconfigured space on the hard disk, known as the swap space, in order to free that page of memory. "The combined sizes of the physical memory and the swap space is the amount of virtual memory available."

Swapping is done mainly for two reasons. First, the system may require more memory than is physically available; the kernel then swaps out the less-used pages and gives the freed resources to the currently running processes. Second, a significant number of the pages used by an application during its start-up phase may only be needed for initialisation and never used again; the system can swap out those pages and free the memory for other applications or even for the disk cache.

Nevertheless, swapping does have a disadvantage. Compared with memory, disks are very slow: memory speeds are measured in nanoseconds, while disk speeds are measured in milliseconds, so access to physical memory can be significantly faster than access to disk. How much this matters depends on how often swapping occurs; if it happens frequently, the system will be slower. "Sometimes excessive swapping or thrashing occurs where a page is swapped out and then very soon swapped in and then swapped out again and so on. In such situations the system is struggling to find free memory and keep applications running at the same time. In this case only adding more RAM will help."

There are two forms of swap space: the swap partition and the swap file. The swap partition is a separate section of the hard disk used only for swapping; no other files can reside there. The swap file is a special file in the file system that sits amongst your system and data files.


Problems of virtual memory management in Linux

There are several possible problems with the page replacement algorithm in Linux, which can be listed as follows:

• The system may react badly to variable VM load or to load spikes after a period of no VM activity. Since kswapd, the page-out daemon, only scans when the system is low on memory, the system can end up in a state where some pages have reference bits from the last 5 seconds, while other pages have reference bits from 20 minutes ago. This means that on a load spike the system has no clue which are the right pages to evict from memory. This can lead to a swapping storm, where the wrong pages are evicted and almost immediately afterwards faulted back in, leading to the page-out of another random page, and so on.

• There is no method to prevent possible memory deadlock. With the arrival of journaling and delayed-allocation file systems, it is possible that the system will need to allocate memory in order to free memory, that is, to write out data so memory can become free. It may be useful to introduce an algorithm to prevent this possible deadlock under extremely low memory situations.

Conclusion

All in all, Linux memory management appears to be more effective than before, based on the assumption that Linux runs fewer applications compared with Windows machines, which have more users and more applications. The system may still react badly to variable VM load; however, regular updates to Linux have managed to lessen such bugs.

Swapping does require more disk space when physical memory is insufficient to serve more demanding applications, and if the disk space is too low the user runs the risk of waiting for, or killing, other processes in order for other programs to work. Additionally, resuming swapped pages may result in corrupted data, but Linux has had the upper hand in solving such bugs.


Frequently Asked Questions

What is the main goal of memory management?

The Memory Management Unit should be able to decide which processes should reside in main memory; control the parts of a process's virtual address space that are not core-resident; and monitor the available main memory, writing processes out to the swap device so that more processes can fit in main memory at the same time.

What is a page fault?

A page fault occurs when a process addresses a page that belongs to its working set but cannot be found in main memory. To resolve it, the kernel updates the working set by reading the page back in from the secondary device.

What is the Minimum Memory Requirement?

Linux needs at least 4MB, and then you will need to use special installation procedures until the disk swap space is installed. Linux will run comfortably in 4MB of RAM, although running GUI apps is impractically slow because they need to swap out to disk.


Section 2: Microprocessor

Introduction

A microprocessor incorporates all or most of the functions of a central processing unit (CPU) on a single integrated circuit, so in the world of personal computers the terms microprocessor and CPU are used interchangeably. The microprocessor is the brain of any computer, whether it is a desktop machine, a server or a laptop. It processes instructions and communicates with external devices, controlling most of the operation of the computer.

How Microprocessors Work

Microprocessor Logic

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on those instructions, a microprocessor does three main things:

Using its ALU (Arithmetic/Logic Unit), a microprocessor is able to perform mathematical operations like addition, subtraction, multiplication and division.

A microprocessor is able to move data from one memory location to another.

A microprocessor is able to make decisions and jump to a new set of instructions based on those decisions.


The following describes an extremely simple microprocessor capable of doing these three jobs.

The microprocessor contains:

An address bus – that sends an address to memory

A data bus – that can send data to memory or receive data from memory

An RD (read) and WR (write) line – that tells the memory whether it wants to set or get the addressed location

A clock line – that lets a clock pulse sequence the processor

A reset line – that resets the program counter to zero and restarts execution


Here is an explanation of the components and how they work:

Registers A, B and C are latches made out of flip-flops.

The address latch is just like registers A, B and C.

The program counter is a latch with the extra ability to increment by 1, or reset to zero, when needed.
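
To tie these pieces together, here is a toy C sketch of a fetch-decode-execute loop with a program counter, an accumulator register and a small made-up instruction set. It is only meant to illustrate the three jobs listed earlier (arithmetic, moving data, and deciding what to execute next), not any real processor.

/* Toy fetch-decode-execute loop with a made-up instruction set
 * (LOAD, ADD, STORE, JNZ, HALT). */
#include <stdio.h>

enum { LOAD, ADD, STORE, JNZ, HALT };

int main(void)
{
    int mem[32] = {
        /* program: mem[20] = mem[10] + mem[11] */
        LOAD, 10,    /* acc = mem[10]      */
        ADD,  11,    /* acc += mem[11]     */
        STORE, 20,   /* mem[20] = acc      */
        HALT, 0,
        0, 0,
        5, 7,        /* data at addresses 10 and 11 */
    };

    int pc = 0, acc = 0, running = 1;
    while (running) {
        int op  = mem[pc];       /* fetch opcode       */
        int arg = mem[pc + 1];   /* fetch operand      */
        pc += 2;
        switch (op) {            /* decode and execute */
        case LOAD:  acc = mem[arg];    break;
        case ADD:   acc += mem[arg];   break;
        case STORE: mem[arg] = acc;    break;
        case JNZ:   if (acc) pc = arg; break;  /* conditional jump */
        case HALT:  running = 0;       break;
        }
    }
    printf("mem[20] = %d\n", mem[20]);   /* prints 12 */
    return 0;
}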

Major trends which affect microprocessor performance and design

Increasing number of Cores:

A dual-core processor is a CPU with two processors or “execution cores” in the same integrated circuit. Each processor has its own cache and controller, which enables it to function as efficiently as a single processor. However, because the two processors are linked together, they can perform operations up to twice as fast as a single processor can. The Intel Core Duo, the AMD X2, and the dual-core PowerPC G5 are all examples of CPUs that use dual-core technologies. These CPUs each combine two processor cores on a single silicon chip. This is different than a “dual processor” configuration, in which two physically separate CPUs work together. However, some high-end machines, such as the PowerPC G5 Quad, use two separate dual-core processors together, providing up to four times the performance of a single processor.
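
As a rough illustration of why two cores can help, the following C sketch (POSIX threads; the array size and workload are arbitrary) splits a summation across two threads, which the operating system can schedule on separate cores so the two halves run in parallel.

/* Sketch: two worker threads summing halves of an array. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long data[N];

struct job { long *start; long count; long sum; };

static void *worker(void *arg)
{
    struct job *j = arg;
    j->sum = 0;
    for (long i = 0; i < j->count; i++)
        j->sum += j->start[i];
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = i;

    struct job a = { data,         N / 2, 0 };
    struct job b = { data + N / 2, N / 2, 0 };
    pthread_t ta, tb;

    pthread_create(&ta, NULL, worker, &a);  /* may run on core 0 */
    pthread_create(&tb, NULL, worker, &b);  /* may run on core 1 */
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);

    printf("total = %ld\n", a.sum + b.sum);
    return 0;
}

Compile with the -pthread flag; on a single-core machine the same program still works, but the threads simply take turns on the one core.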


Reducing size of processor

Processor size is one of the major trends that has affected processors in recent years. When the processor becomes smaller there are many advantages: more cores can be included on a single chip, energy is saved, and speed increases.

45 nm processor technology: Intel has introduced 45 nm technology in the Intel Core 2 and Intel Core i7 processor families. Intel 45 nm high-k silicon processors contain a larger L2 cache than 65 nm processors.

32 nm processor technology: At the research level, Intel has introduced a 32 nm processor (code-named Westmere, based on Nehalem), expected to be released in the second quarter of 2009.

Energy saving

Energy is one of the most important resources in the world, so we must save and protect it for the future. Power consumption in microprocessors is therefore one of the major trends. For instance, the Intel Core 2 family of processors is very efficient and has intelligent power-management features, such as the ability to deactivate unused cores, yet it still draws up to 24 watts in idle mode.


High speed cache and buses

In recent years, microprocessor manufacturers such as Intel have introduced new cache technologies that gain further efficiency improvements and reduce latency. Intel Advanced Smart Cache technology is a multi-core cache that reduces latency to frequently used data; in modern processors the cache size has increased to up to 12 MB.
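
The benefit of a cache can be seen even from ordinary code. The sketch below (plain C; the matrix size is arbitrary) sums the same matrix twice, once in cache-friendly row order and once in cache-unfriendly column order; the second pass is typically much slower because it keeps missing the cache.

/* Sketch: cache effects on access latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

static long sum_rows(int *m)
{
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i * N + j];      /* sequential, cache friendly */
    return s;
}

static long sum_cols(int *m)
{
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i * N + j];      /* strided, cache unfriendly  */
    return s;
}

int main(void)
{
    int *m = calloc((size_t)N * N, sizeof *m);
    if (m == NULL)
        return 1;

    clock_t t0 = clock();
    long a = sum_rows(m);
    clock_t t1 = clock();
    long b = sum_cols(m);
    clock_t t2 = clock();

    printf("row-major   : %.3f s (sum %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, a);
    printf("column-major: %.3f s (sum %ld)\n",
           (double)(t2 - t1) / CLOCKS_PER_SEC, b);
    free(m);
    return 0;
}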

Figure: installing a heat sink and a microprocessor.


Differences between Microprocessors

Servers

A server microprocessor should above all provide uninterrupted uptime and stability, with low power consumption and modest demands on processor resources such as the system cache. That is why Unix and Linux are most often used as server operating systems: they require fewer hardware resources and use them effectively, so less heat is dissipated from the processor and less cooling is needed.

Desktop Processors

Desktop microprocessors are somewhat different from server microprocessors, because they are less concerned with power consumption or with using fewer operating-system resources. The goal of desktop microprocessors is to deliver as much performance as possible while keeping the cost of the processor low and the power consumption within reasonable limits. Another important point is that most of the programs used on desktop machines are designed for long-running, processor-intensive jobs, such as rendering a high-definition image or compiling a source file, so the processors are also designed to suit that kind of processing.

Laptop Processor

The CPU produces a lot of heat. In desktop computers, a system of fans, heat sinks, channels and radiators is used to cool the machine. Since a laptop is small, with far less room for any cooling method, the CPU usually:

Runs at a lower voltage and clock speed (reduces heat output and power consumption but slows the processor down)

Has a sleep or slow-down mode (when the computer is not in use, or when the processor does not need to run as quickly, the operating system reduces the CPU speed)


Embedded Microprocessors

Most embedded devices use microcontrollers instead of separate microprocessors; a microcontroller is an implementation of a whole computer inside a small, thumb-sized chip. These microcontrollers vary in performance because of battery consumption and instruction-length constraints. Most of them are designed using the RISC architecture to minimise complexity and the number of instructions per processor. Embedded device processors have high speed potential, but the problems they face are high power consumption and heating.

Conclusion

Current technology allows for one processor socket to provide access to one logical core. But this approach is expected to change, enabling one processor socket to provide access to two, four, or more processor cores. Future processors will be designed to allow multiple processor cores to be contained inside a single processor module.


Frequently Asked Questions:

1. How does the operating system share the CPU in a multitasking system?

There are two basic ways of establishing a multitasking environment: time-slice and priority-based.

In a time-slice multitasking environment, each application is given a set amount of time (250 milliseconds, 100 milliseconds, etc.) to run, and then the scheduler turns execution over to some other process. In such an environment each READY application takes a turn, allowing them to effectively share the CPU.

In a priority-based environment, each application is assigned a priority, and the process with the highest priority is allowed to execute as long as it is "ready", meaning that it will run until it needs to wait for some kind of resource such as operator input, disk access or communication. Once a higher-priority process is no longer "ready", the next highest-priority ready process begins execution, and runs until it is no longer "ready" or until the higher-priority process takes the processor back.

Most real-time operating systems in use today tend to be some kind of combination of the two.
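
As a rough sketch of how the two approaches combine, the following C program (task names and priorities are made up for illustration) picks the highest-priority ready task on each time slice and rotates among tasks of equal priority, so equal-priority tasks share the CPU while lower-priority ones wait.

/* Sketch: priority scheduling with round-robin among equal-priority tasks.
 * A real OS scheduler is driven by timer interrupts; the tick loop here
 * just stands in for successive time slices. */
#include <stdio.h>

struct task { const char *name; int priority; int ready; };

static struct task tasks[] = {
    { "logger",  1, 1 },
    { "ui",      5, 1 },
    { "network", 5, 1 },
    { "backup",  0, 0 },   /* blocked: waiting for disk */
};
#define NTASKS (int)(sizeof tasks / sizeof tasks[0])

/* Pick the highest-priority ready task, scanning in round-robin order
 * starting after the previously run task so that ties take turns. */
static int pick_next(int previous)
{
    int best = -1;
    for (int k = 1; k <= NTASKS; k++) {
        int i = (previous + k) % NTASKS;
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    }
    return best;
}

int main(void)
{
    int current = -1;
    for (int tick = 0; tick < 6; tick++) {   /* one iteration = one slice */
        current = pick_next(current);
        if (current < 0)
            break;
        printf("slice %d: running %s\n", tick, tasks[current].name);
    }
    return 0;
}

Running it shows "ui" and "network" alternating, while the lower-priority "logger" never gets the CPU as long as higher-priority tasks stay ready.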

2. What is a multi-core?

A multi-core processor is two or more independent cores combined into a single package, built on a single integrated circuit.

3. What is the difference between a processor and a microprocessor?

Generally, a processor is "the part of a computer that interprets (and executes) instructions". A microprocessor is a CPU that fits on just one IC (chip). For example, the CPU in a PC is on a single chip, so it can also be referred to as a microprocessor. It came to be called a microprocessor because in the old days processors were normally implemented across many ICs, so it was considered quite a feat to fit the whole CPU into one chip, which was therefore called a "microprocessor".

