Friday, October 26, 2012

Multiprogramming in Embedded Software

SA,..
I know this is a pretty long topic, discussed heavily in the context of writing multi-threaded applications for the desktop, but I have found few books that tackle the subject in embedded software. I will also review some basic operating system concepts that can be found in any real-time operating system book.

Embedded Systems Architecture

The minimal software architecture to construct a simple embedded system, or even a game like FIFA 2012, is the following code:

// for games, the loop ends when the user quits the game himself
while (!exit)
{
    render_graphics();
    updateAI();
    updatePhysics();
}
The same goes for an embedded system, except that the system exits or shuts down only when the user switches off the power.
while (1)
{
    readAdc();
    calculateWeight();
    sendtoPC();
}
This is also known as the endless loop (or super loop); in games it's called the Game Loop.
For embedded systems, that basic architecture is simple and efficient (no need for timers or other uC resources). On the other hand, if your application requires reading ADC data at a precise interval, for example every 2 ms, that architecture won't provide the flexibility or the accuracy for that task.
Another issue is that the uC will be busy all the time, running at full power, and that has a dramatic impact on power consumption, especially if you run your system on batteries...

Basically, you need a scheduler to solve the previous issues.    
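The simplest fix, short of a full RTOS, is a time-triggered cooperative scheduler driven by a single timer interrupt. Below is a minimal sketch; the names (`sched_add`, `sched_tick`, `MAX_TASKS`) and the 1 ms tick are my own assumptions for illustration, not any specific RTOS API. In a real system, `sched_tick()` would be called from the timer ISR (or the ISR would set a flag for the main loop), and the main loop could put the uC to sleep between ticks to address the power issue above.

```c
#include <stdint.h>

#define MAX_TASKS 4

typedef struct {
    void (*run)(void);   /* task function, runs to completion */
    uint32_t period_ms;  /* desired period in ticks (here: ms) */
    uint32_t elapsed_ms; /* ticks since the task last ran */
} task_t;

static task_t tasks[MAX_TASKS];
static uint8_t task_count = 0;

/* Register a periodic task; returns 0 on success, -1 if the table is full. */
int sched_add(void (*run)(void), uint32_t period_ms)
{
    if (task_count >= MAX_TASKS)
        return -1;
    tasks[task_count].run = run;
    tasks[task_count].period_ms = period_ms;
    tasks[task_count].elapsed_ms = 0;
    task_count++;
    return 0;
}

/* Call once per timer tick (e.g. from a 1 ms timer ISR, or after the ISR
   sets a flag); between ticks the main loop can put the uC to sleep. */
void sched_tick(void)
{
    for (uint8_t i = 0; i < task_count; i++) {
        tasks[i].elapsed_ms++;
        if (tasks[i].elapsed_ms >= tasks[i].period_ms) {
            tasks[i].elapsed_ms = 0;
            tasks[i].run();
        }
    }
}

/* Example task: stands in for readAdc(); it just counts its invocations. */
int adc_reads = 0;
void read_adc_task(void) { adc_reads++; }
```

With this structure, readAdc() can be registered with a 2 ms period and will run on time regardless of what the other tasks do, as long as every task is short.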

Basic OS Concepts

I remember when I got my hands on Windows 98 and finally managed to play, and the computer tech guy surprised me by explaining how nice Windows 98 was: it could run FIFA 98 while you listened to Winamp at the same time. That was an interesting feature of the Windows series, since the previous operating system, Windows 3.11, didn't support preemptive multitasking.

Scheduling is the method by which processes share CPU time. By switching the CPU among processes, like a game and Windows Media Player, it makes the computer more productive. A computer with a single CPU can run only one process at a time; likewise a weighing scale can only read the weight from its load cell, and so on. The idea of multitasking, or multi-threaded applications, is to have some process running at all times, to utilize the CPU as much as possible. The basic way to achieve that is to execute a process until it has to wait, typically for the completion of an I/O request. In a simple system like a small embedded one, the CPU just sits idle, and all that time waiting for I/O completion is wasted. In most OSs, several processes are kept in memory at once; when one process enters a wait state, the OS switches the CPU to another process. Making that selection is the job of the CPU scheduler.

There are two different types of CPU scheduling:
1. Non-Preemptive (cooperative) Scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating (ESC) or by switching to the waiting state (ALT-TAB ;) ). This method was used by Windows 3.1: a process had to yield the CPU voluntarily, so a single misbehaving task could block the whole system.

2. Preemptive Scheduling: scheduling is prioritized, and the highest-priority ready process should always be the one currently running on the CPU; the OS can interrupt a running process at any time. Windows 95, 98, etc. work this way. It provides a true multitasking system architecture.

There are different scheduling algorithms, like round-robin; you can read more about them in any OS book.

The next blog, isA, will discuss critical sections and issues in multitasking, and form a design pattern for them.

Wednesday, October 24, 2012

Design Patterns for Embedded Software (1)



SA..

Introduction

Design patterns are a group of reusable solutions for problems that appear in software design. Those solutions commonly appear while writing high-level object-oriented software for the desktop. A great book, usually called the Gang of Four book, is Design Patterns: Elements of Reusable Object-Oriented Software; it explains design patterns in detail.

Commonly, it is very rare to use object-oriented programming while writing firmware, although C++ compilers are available, like the IAR C/C++ Compiler. I have also searched for resources on design patterns for embedded software, and I found only one book that explains them; it tries to take the Gang of Four patterns, like the Observer and Strategy patterns, and give them a firmware flavor. The book is Design Patterns for Embedded Systems in C: An Embedded Software Engineering Toolkit. The book is really bad and I didn't like it at all; using UML for C code is awful.

1. Polled Input Pattern

The first pattern that I would like to discuss is the Polled Input. Sometimes, to read an input from a switch for example, you poll it in a loop like while(switchInput != 0); This means the microcontroller will be stuck there, waiting for the switch to go to ground. This kind of blocking read is okay for systems that are not real-time or time-triggered. Alternatively, you could use an ISR (Interrupt Service Routine) that fires on the sensor's falling or rising edge, but as you know, that makes the CPU stop its current task to service the ISR. This kind of context switching is an overhead, and it raises serious complications in embedded systems.

The pattern that solves this should have the following properties:
1. In a real-time operating system, or under any scheduler, a periodic task should poll the input for the occurrence of the event.
2. The period of the task, Ttask, should satisfy Ttask <= min Tevent (the shortest interval between two input events).
Suppose you would like to poll a push button, as shown in the following figure, with a 10k pull-up resistor to VCC. A common problem with push buttons, or any mechanical switch, is contact bounce: residual oscillations caused by the mechanics, which you need to filter out, i.e. "debounce". Of course you can filter the noise electronically, which is hardware filtering, but the easier way is to filter those spikes in software.
The following figure shows the spikes from switching a push button on and off.

A simple code that shows the pattern idea
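Since the original snippet isn't shown here, the following is a minimal sketch of the idea: a periodic task samples the pin and only accepts a level change after several consecutive matching samples, which both polls and debounces the switch. The names (`switch_update`, `DEBOUNCE_COUNT`) and the active-low wiring (10k pull-up, pressed = 0) are assumptions for illustration.

```c
#include <stdint.h>

#define DEBOUNCE_COUNT 3   /* consecutive matching samples required */

static uint8_t stable_state = 1;  /* debounced level, 1 = released (pull-up) */
static uint8_t candidate = 1;     /* level we are currently trying to confirm */
static uint8_t match_count = 0;

/* Feed one raw sample of the pin (0 = pressed, active low).
   Call at a fixed rate from the scheduler, e.g. every 10-50 ms.
   Returns 1 exactly once per confirmed press, 0 otherwise. */
uint8_t switch_update(uint8_t raw)
{
    if (raw == candidate) {
        if (match_count < DEBOUNCE_COUNT)
            match_count++;
    } else {                 /* level changed: restart the count */
        candidate = raw;
        match_count = 1;
    }
    if (match_count >= DEBOUNCE_COUNT && candidate != stable_state) {
        stable_state = candidate;
        if (stable_state == 0)
            return 1;        /* clean falling edge = one press event */
    }
    return 0;
}
```

On real hardware, the raw sample would come from a port register read (the exact register depends on your uC); bouncing shows up as alternating samples, which keep resetting the counter, so only a level stable for DEBOUNCE_COUNT periods gets through.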



Note that the previous update function should be run from a task scheduler or a timer, for example every 50 ms to 500 ms. Note that it also solves the debouncing issue with switches.

Saturday, October 6, 2012

In the beginning there was a pixel..

Quoting from the bible, JN 1:1, it all started with plotting a pixel on the screen. If you have managed to draw a pixel on the screen, then you can write FIFA 2012 on your microcontroller, of course on a high-specs one :).
By drawing a pixel on the screen, you can use Bresenham's line algorithm to draw a line, then you can draw a triangle, then a polygon, which is simply many triangles, and then texture, shade, and light it. A great, outdated but still useful, resource for all these algorithms is the book Computer Graphics: Principles and Practice, by Foley and van Dam.

One of the first game consoles that attempted 3D software rasterization was the 3DO console. It had a 32-bit ARM6 clocked at 12.5 MHz!! Embedded guys, I'm sure you will be surprised by how low the specs were, and how beautiful the games made on that console were; and now we have smaller, lower-power, faster microcontrollers like the PIC32.. 3DO Specs
Anyway,
How can we draw a frame of an NTSC signal?

The pseudo-code for the algorithm is simple:

1. Draw 8 scan lines  // top over scan
2. Draw 240 scan lines // the active video
3. Draw 10 scan lines // bottom over scan
4. Generate VSync pulse // send the sync pulse to redraw
5. Delay 6 scan lines 
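The steps above can be sketched in C as follows. The line-level primitives here are just counting stubs, my own names for illustration; on real hardware each one would be cycle-accurate code driving the sync and luminance voltages for 63.5 us. The stubs make the frame structure easy to see and check.

```c
/* Stub line primitives: on real hardware these would hold the sync and
   luminance voltage levels with microsecond timing. Here they only count
   calls so the frame structure can be verified. */
int blank_lines = 0, active_lines = 0, vsyncs = 0;

void blank_line(void)      { blank_lines++; }            /* black 63.5 us line */
void active_line(int line) { (void)line; active_lines++; } /* line with pixels */
void vsync_pulse(void)     { vsyncs++; }                 /* beam back to top  */

/* One NTSC frame, following the pseudo-code above. */
void draw_ntsc_frame(void)
{
    int line;
    for (line = 0; line < 8; line++)     /* 1. top overscan     */
        blank_line();
    for (line = 0; line < 240; line++)   /* 2. active video     */
        active_line(line);
    for (line = 0; line < 10; line++)    /* 3. bottom overscan  */
        blank_line();
    vsync_pulse();                       /* 4. vertical sync    */
    for (line = 0; line < 6; line++)     /* 5. post-sync delay  */
        blank_line();
}
```

Running draw_ntsc_frame() in a loop at the frame rate, with the stubs replaced by real signal-generation code, produces the continuous signal the TV expects.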

A nice Atmel application note for the AVR on generating NTSC signals can be found here:
http://www.atmel.com/Images/mega163_3_04.pdf
 

Sunday, September 30, 2012

The Dark Age of Video Games Machines

I remember playing Mortal Kombat on the 16-bit Sega game console; I was amazed by how the different characters animated on the screen, and how on earth my joystick controlled some pictures that are just dead bitmaps on the screen. It all happens by "playing" with the TV or monitor's electron gun on the phosphor screen.




                             Mortal Kombat on 8 bit devices and Mortal Kombat on an XBox
Now we have dedicated GPUs (Graphics Processing Units), and they are programmable: you can write programs called shaders for them, perform complex computations, and simulate complex systems. Note that a GPU is simply a video generator : ). We also now have special APIs that access them easily, like OpenGL, DirectX, etc.

How Does a GPU Generate Signals?!

Current GPUs are capable of generating advanced video formats like VGA, HDMI, etc.
Early video game consoles, VCRs, and other video devices all use the basic concept that the image is painted on the screen line by line. Referring to the next figure, the device basically paints the image from the top left of the screen, moving horizontally to the right edge, then moves down to the next line (shown in grey), and repeats the same zig-zag motion until the whole screen has been scanned. This process is repeated continuously; the number of times the screen is scanned per second is known as the refresh rate, measured in Hz; I'm sure you have seen that term frequently :). A frame is the result of one complete scanning pass. There are different scanning techniques, like progressive scanning and interlaced scanning; you can read more about both here: http://en.wikipedia.org/wiki/Progressive_scan , http://en.wikipedia.org/wiki/Interlaced_video




A lot of video standards have been formalized, but they all share the same mechanism. NTSC, commonly used in the US, has 29.97 frames per second and 525 scan lines per frame. PAL, used in Asia and Europe, has 25 FPS and 625 lines. SECAM, used in France and elsewhere, also has 25 FPS and 625 lines.

Those standards encode the video signal into something called a "composite signal". A composite signal holds luminance information, which carries the black-and-white image, and synchronization information: the horizontal and vertical sync signals. Referring to the previous figure, you can see a complete description of a composite signal and the timing diagram of the NTSC video standard. Basically, a horizontal line consists of:
1. A horizontal synchronization pulse, used to signal that the line has finished scanning and the beam must be reset to the start of the next line. It is usually called HSync; you will find that term used mainly for the video generator ICs found in TVs.
2. The back porch, which creates the dark frame around the image.
3. The front porch, which produces the right edge of the image.
4. The black-and-white (luminance) information, which varies in the range 0 to 1 V, where 0.3 V represents black and 1.0 V represents white, with the gray intensities in between.
From the figure, one line lasts 63.5 us. First, a 4.7 us sync pulse is sent at 0 V, telling the TV or monitor that a new line is about to be drawn; then 52.6 us of video data follow.

This is only for the B/W video signal; the color information is transmitted separately, modulated on a higher-frequency carrier (do you remember frequency modulation? Read Oppenheim's or Lathi's book).
A vertical sync (VSync) pulse is also sent to signal that a complete frame, or image, has been drawn on the screen and the gun needs to be reset back to line number zero.

**For Fun
I remember when I first repaired our old 1980 National home TV. It showed just a horizontal white bar on the screen, which surely means the vertical deflection circuit was broken and the TV couldn't scan the screen vertically. I changed the vertical deflection IC and it worked. Note that the opposite can also happen: when the horizontal deflection circuit is defective, a vertical white bar appears on the screen, and that is really bad news for a TV, because it usually means the HOT (Horizontal Output Transistor) and the line transformer are defective, and that transformer is expensive and hard to replace.

How can we generate a composite signal ?

Using any microcontroller, you can use any digital-to-analog (D/A) conversion method to generate a composite signal: PWM, an R/2R ladder, or a dedicated D/A converter are all sufficient for that purpose.
All you need are analog voltages that vary from 0 to 1 V, but with very precise timing. As you have seen in the timing diagram, you need delays on the order of microseconds, which is hard to achieve in C, as you can barely know the cycle count of the compiled instructions; hence it is better to use assembly language for that purpose.

A simple R/2R ladder like the one shown can be used to generate the required voltages. Note that the input impedance of most TVs is 75 ohm; using the voltage divider formula, you can easily get the three levels needed to generate a black-and-white composite signal. Make a truth table of the 2 bits and check the output.
This is of course a 2-bit B/W signal. Imagine a GeForce GTX 580 with its 256-bit bus generating a VGA signal, which is more complex than NTSC/PAL signals..
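To see where the levels come from, you can work the divider out numerically. The resistor values below (1k on the sync pin, 330 ohm on the video pin, a 5 V supply) are assumed example values, not taken from the figure. By Millman's theorem, each output pin drives the 75-ohm TV input through its resistor, so the output is the conductance-weighted average of the driven pin voltages.

```c
#define VCC    5.0    /* assumed microcontroller supply voltage */
#define R_SYNC 1000.0 /* assumed resistor on the "sync" pin, ohms */
#define R_VID  330.0  /* assumed resistor on the "video" pin, ohms */
#define R_TV   75.0   /* TV composite input impedance, ohms */

/* Millman's theorem: each pin drives 0 V or VCC through its resistor into
   the 75-ohm load; sync_bit and video_bit are the 2-bit DAC inputs. */
double composite_level(int sync_bit, int video_bit)
{
    double g_sync = 1.0 / R_SYNC;
    double g_vid  = 1.0 / R_VID;
    double g_tv   = 1.0 / R_TV;
    double num    = VCC * (sync_bit * g_sync + video_bit * g_vid);
    return num / (g_sync + g_vid + g_tv);  /* volts across the TV input */
}
```

With these assumed values, the four input combinations come out to roughly 0 V (sync tip), 0.29 V (black), 0.87 V, and 1.16 V, close to the 0 to 1 V composite range; tweak the resistors to hit the exact levels you need.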






Sunday, September 9, 2012

Introduction to Embedded Software and Video Game Console Design

SA,

In my childhood, I loved to play games on the Atari, Sega, etc.; actually, I was addicted. Once, my Sega broke down and couldn't work anymore, so I opened it to see what was inside that box that could make those games. I found a bunch of resistors, ICs, etc. How did those silicon chips make those games, I wondered. I was also eager to know how those "dead pictures" could become alive.

A video game console is a product that many disciplines contribute to, from the electronics engineers who design the circuits, to the game programmers, audio engineers, etc.

I have always been interested in explaining to myself how those black boxes work. I will write a few blogs over the next few weeks on how to generate video/audio signals to plot a pixel, then draw lines, triangles, and polygons, which can be used to write a small game!

There were a lot of constraints in using 8/16-bit uCs for a console like the Sega, which shows the engineers of that age were amazing: they knew a lot of tricks and optimizations, many of which can be found in Michael Abrash's book Zen of Graphics Programming. He also participated in the work on Microsoft's Xbox GPU.

Knowing how video game consoles work will also teach you how your GPU works, and writing a software renderer will show you how current APIs like DirectX or OpenGL were written.

I will stick with the PIC24 from Microchip; it is a fast uC. In the next blog, I will explain the basic circuitry that is needed for this project.