CSE Vision – Part 4 – Time Table Integration

Sawubona!

There has been no progress on CSE Vision for quite some time now, but recently I have had a bit of time to work on it. I decided to work on integrating the time table into the system. This would give everyone a good idea of where the lecturers in our department of CSE (Computer Systems Engineering) at TUT (Tshwane University of Technology) are during the course of the day.

Our department is given a time table every semester in the form of an Excel workbook. It consists of every subject, lecturer, venue, etc. in the FoICT (Faculty of Information and Communication Technology). I had to read all this data in from the workbook, sort through it and pull out any data about lecturers in our department; most notably what time their classes start and end for all their subjects, and what venue they will be in during that time.
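For the curious, here is a minimal sketch of what that import step might look like. It assumes the ClosedXML library and a made-up column layout (subject, lecturer, venue, start and end times in columns 1 to 5); the real workbook's format is of course messier than this.

using System;
using System.Collections.Generic;
using ClosedXML.Excel; // assumption: any Excel-reading library would do here

class TimetableEntry
{
    public string Subject, Lecturer, Venue;
    public TimeSpan Start, End;
}

class TimetableImporter
{
    // Keep only rows belonging to lecturers in our department
    public static List<TimetableEntry> Import(string path, HashSet<string> cseLecturers)
    {
        var entries = new List<TimetableEntry>();
        using (var workbook = new XLWorkbook(path))
        {
            foreach (var row in workbook.Worksheet(1).RowsUsed())
            {
                string lecturer = row.Cell(2).GetString();
                if (!cseLecturers.Contains(lecturer))
                    continue; // not one of ours, skip the row

                entries.Add(new TimetableEntry
                {
                    Subject  = row.Cell(1).GetString(),
                    Lecturer = lecturer,
                    Venue    = row.Cell(3).GetString(),
                    Start    = TimeSpan.Parse(row.Cell(4).GetString()),
                    End      = TimeSpan.Parse(row.Cell(5).GetString())
                });
            }
        }
        return entries;
    }
}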

Once I had successfully managed to achieve that, I had to construct a database to store that information. Now that that is done, I can get to the “fun” bit!

Now, every time a lecturer’s class starts, their availability status is automatically changed to “in class”. Yes, that is a new availability status. I decided that “in class” needed its own status because the “busy” status could also apply when a lecturer is in their office, not just when they are physically in a class.

Here’s what the new status looks like at the kiosk in the lobby:

Lecturer in class

Once the class passes its end time, the lecturer’s status reverts to whatever it was before.
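The logic behind the automatic update is simple enough to sketch, reusing the TimetableEntry type from the sketch above. The helpers SetStatus, GetStatus and RevertToPreviousStatus are hypothetical names; the real work happens in the site's back end against the database.

// Hypothetical sketch: run periodically, flip statuses based on the timetable.
void UpdateStatuses(List<TimetableEntry> entries, DateTime now)
{
    foreach (var entry in entries)
    {
        bool inClass = now.TimeOfDay >= entry.Start && now.TimeOfDay < entry.End;
        if (inClass)
            SetStatus(entry.Lecturer, "in class");      // class in progress
        else if (GetStatus(entry.Lecturer) == "in class")
            RevertToPreviousStatus(entry.Lecturer);     // class has ended
    }
}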

I have also introduced yet another status: one for when I have no data about a lecturer’s status, i.e. they have not yet said what state they are in. To depict this they get a “beautiful” grey coloured card:

Lecturer inactive

The system works well at the moment – I just need to cater for events such as public holidays and University holidays where the automatic updates should of course not occur.

This all needs to be done from the admin side which is still some way away from being completed!


One-Dimensional Localization

Guten Tag!

Localization is the process a robot carries out to work out where it is in a given area.

The Processing sketch below illustrates a localization method for a one-dimensional grid. Before seeing the sketch in action, you need to know a few things about it. The sketch tries to determine probabilistically where the robot is in the world. The world is a linear, cyclic grid of coloured cells that can be either red or green, and the robot can “see” what colour it is standing on by using its on-board camera. The robot has the ability to move to the left and to the right. After moving and sensing a few times, the robot should have a probability-based idea of where it is in the world.

Right, that’s it! Below are some usage instructions, with the sketch below that. Try it out.

Instructions for use:

The world begins with the robot positioned in the first cell, all the cells coloured red, and the robot in an initial state of maximum confusion (it’s totally lost).

The user can change the colour of the cells by clicking on them. Left click for red and right click for green.

There are then four buttons in the sketch. Two of them issue commands to move the robot (the yellow circle) either one cell to the left or one cell to the right. The other two buttons simulate the feedback from the colour sensor on the robot. One of the two simulates the robot sensing red and the other simulates the robot sensing green.

This of course allows you to simulate receiving false information from a robot’s sensors (the robot can be on a green square when the user clicks “Sense Red”).

All these commands then affect the probability of the robot being at a certain point in the world. These are the numbers shown beneath the coloured cells. The higher the probability of a cell, the bigger the belief that the robot is currently at that specific cell in the world.

Right at the bottom of the sketch there are numbered buttons that allow you to change the number of cells in the world. Each time the number of cells changes the world resets (robot is lost again).

CLICK ME TO TRY IT OUT!

So, after moving and sensing a few times, the robot has a fairly good idea of where it is in the world!
It achieves this by implementing a surprisingly simple algorithm. We will look at this algorithm in more detail below.

What does the robot need in order to be able to do this? For starters, it needs a map of the world or area that it is in. It would not be able to localize without this on-board map.
It also, of course, needs to be able to move to the left and to the right, and to sense the colour that it is standing on.
Furthermore, it needs to store the probability of where it is on the map it has. Each cell in the world needs to be allocated a value that represents the probability that the robot is occupying that cell. Because the robot starts out not knowing where it is, every cell’s initial probability should be the same as every other cell’s. This makes sense if you think about it this way: the robot has no idea where it is, so there is an equal chance that it could be on any cell in the world.
These probabilities should be normalized (all the probabilities added together should add up to 1) to make them easier to work with and easier to visualise.
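In code, the initial belief is just an array filled with 1/n. A minimal sketch in C# (the interactive sketches themselves are written in Processing):

// Uniform initial belief: the robot is equally likely to be on any cell.
double[] InitialBelief(int cellCount)
{
    var belief = new double[cellCount];
    for (int i = 0; i < cellCount; i++)
        belief[i] = 1.0 / cellCount; // all values equal, summing to 1
    return belief;
}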

Below is an example of what this would look like in a 5 cell world (every cell starts at 1/5):

0.2 | 0.2 | 0.2 | 0.2 | 0.2

And here in a 7 cell world (every cell starts at 1/7, roughly 0.143):

0.143 | 0.143 | 0.143 | 0.143 | 0.143 | 0.143 | 0.143


Now what happens to these probabilities when a colour is sensed?

The cells with colours that match the colour being sensed have their probability values multiplied by 0.6.

The cells with colours that do not match the colour being sensed have their probability values multiplied by 0.2.

This results in an increased probability of being on the cells whose colours match what was sensed in relation to cells with colours that do not match what was sensed. This makes sense! If you sense red, there is a higher probability that you are on a red cell than on a green cell!
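Here is a sketch of that sense update. The weights are the 0.6 and 0.2 from above, and the result is renormalized afterwards so the values still sum to 1, as described earlier:

// Sense update: weight each cell by how well its colour matches the reading,
// then renormalize so the probabilities sum to 1 again.
void Sense(double[] belief, int[] worldColours, int sensedColour)
{
    double total = 0;
    for (int i = 0; i < belief.Length; i++)
    {
        belief[i] *= (worldColours[i] == sensedColour) ? 0.6 : 0.2;
        total += belief[i];
    }
    for (int i = 0; i < belief.Length; i++)
        belief[i] /= total;
}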

Why not just maximise the probabilities for cells matching the colour sensed and minimise the values for cells with non-matching colours though?

This is because we are working with (or planning to work with) real-life hardware. Unfortunately, in the real world things don’t always go to plan (shocker, I know). Especially when working with electronic hardware, things can and do go wrong. The colour sensor may malfunction, for example, reading red instead of green or vice versa, and if the multiplication values are too extreme, a single bad reading could completely throw your robot off and leave it with irreversibly incorrect probabilities! Your robot will be doomed to suffer eternal “lostness”!

“Fun fact” – lostness is not a real word!

Now, what happens to the probabilities when the robot is moved? As you have likely seen from moving the robot around in the sketch, the probabilities move with the robot, i.e. if the robot moves one cell right, all probabilities move one cell to the right. This is known as exact motion. It assumes that if a move command is issued to the robot, the robot will successfully execute the command (can you see where I am going with this?). Once more we need to think about what would happen in a real-life scenario with a physical robot interacting with the world. The robot could try to move, have its wheels slip and, as a result, not move as far as it planned to. The robot could have an issue with its wheel encoders and end up overshooting its target. These are things that our code should cater for. Enter – inexact localization.

The sketch below implements inexact localization. Try it out and see how different it is.

CLICK ME TO TRY IT OUT

So, what is happening here exactly? The program is now catering for overshoot and undershoot. It does this by estimating the likelihood that the robot will overshoot or undershoot and allocating percentages to those likelihoods.

It allocates 10% to overshoot, 10% to undershoot, and the remaining 80% represents how sure the system is that an issued command will get the robot to the intended destination.

So what does this mean for our probabilities?

For each and every probability, 80% of its value is sent to the target cell’s probability. 10% of its value is sent to the following cell in the direction of intended movement (overshoot) and 10% stays where it is (undershoot).

The picture below depicts what the process is like for attempting to move one cell to the right:


The values arriving at each cell are then simply summed up, and those sums become the new probability values for every cell. This results in the system being able to deal with erroneous data from the sensor every now and then. If the sensors are completely broken then of course it does not matter how fantastic your code is; the system will still fail to localize.
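As a sketch, the move step could look like this: each cell's probability is split 80/10/10 between the target cell, the overshoot cell and the cell it is already on, and the contributions are summed into a new belief array (the world is cyclic, hence the modulo). Exact motion is just the special case where the 0.8 becomes 1.0 and the other two terms fall away.

// Inexact move: redistribute each cell's probability over the possible
// outcomes. dir is +1 for a move right, -1 for a move left.
double[] Move(double[] belief, int dir)
{
    int n = belief.Length;
    var next = new double[n];
    for (int i = 0; i < n; i++)
    {
        int target = ((i + dir) % n + n) % n;       // intended destination
        int overshot = ((i + 2 * dir) % n + n) % n; // one cell too far
        next[target] += 0.8 * belief[i];            // move succeeds
        next[overshot] += 0.1 * belief[i];          // overshoot
        next[i] += 0.1 * belief[i];                 // undershoot: stays put
    }
    return next;
}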

This is such a simple and powerful algorithm that allows your robot to do something amazing. This is only the beginning of localization though (one-dimensional). Think about how something like the Google self-driving car localizes. Believe it or not it does essentially the same thing that we are doing here – just with a far bigger budget 😉

I’m going to leave you with a few things to try out in the sketches. See if you can understand what happens when you do the actions and try to state why you think it happens.

Things to try:

Try issuing various move-sense commands and observe how the probabilities change.

Try doing an initial sense and then moving multiple times without sensing. What happens to the probabilities when you keep on moving without sensing? Why do you think this happens?

What happens when the robot stands on one spot and keeps sensing a single colour?

Change the number of cells in the world. How does that affect things?

 


Proportional Control Example

Buongiorno!

When learning about different control strategies, it often helps to see the strategy in action.
I therefore made a Processing sketch that illustrates proportional control applied to a motor, for my students to play around with.
You can see the sketch in action here:
Click me! I’ll take you where you want to go!

  • The white triangle simulates the angle of a motor (the process variable; in a real-life system this feedback would be provided by a potentiometer, for example).
  • The red line is used to visualise the set point.
  • Click the left mouse button to create a new set point at the angle you clicked at.

The system will then use proportional control to get the motor’s angle to match the set point’s angle.

You can also press the top left button to turn the simulated friction on and off.
Note how proportional control cannot overcome that friction by itself, and remember that integral control solves this problem by generating an output that grows in magnitude the longer the error persists!
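For reference, the proportional part really is a one-liner. Below is a minimal sketch of such a control loop; the field names and the gain value are made up for illustration, not what my sketch actually uses:

// Minimal proportional control loop for a simulated motor angle.
double setPoint;  // target angle in degrees (set by the user's click)
double angle;     // current simulated motor angle in degrees
double Kp = 0.05; // proportional gain: a tuning constant

void Update() // called once per frame
{
    double error = setPoint - angle; // how far we are from the target
    double output = Kp * error;      // proportional term: shrinks as the error shrinks
    angle += output;                 // apply the output to the simulated motor
    // With friction on, |output| can drop below the friction threshold before
    // the error reaches zero, leaving the steady-state error mentioned above.
}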


CSE Vision – Part 3 (Desktop Application)

Aloha,

The system has been in use for a few weeks now and it turns out it’s pretty inconvenient for the staff to use.
Even I didn’t feel like logging into the site every time I wanted to change my status.

Solution (well at least before we get the magnetic switches installed):

Make a desktop app that can be set to run at start-up and remembers your username and password so that you don’t have to log in all the time.

Here’s what it looks like:
CSE Vision Desktop Application

Video of it in action:

So to use it you simply enter your username and password and then click one of the three coloured buttons to change your status.
You can also change the message displayed to the students directly from the app by entering it into the text box and then clicking the “Update” button.

It was surprisingly easy to do. I did it in Visual Studio 2013 in C#.

Below is some of the code used:

// using System.Net; using System.IO;
// Build the request URL with the status and credentials as query parameters
url = "http://ictcms.tut.ac.za/censored.php?message=0&status=" + state.ToString()
      + "&staffNo=" + username.Text + "&pwd=" + password.Text;

HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
WebResponse response = request.GetResponse();
StreamReader reader = new StreamReader(response.GetResponseStream());
string responseText = reader.ReadToEnd();
infoLabel.Text = responseText;
string passText = password.Text;
//encrypt pwd before storing it

So essentially C# can perform a WebRequest with a specified URL, to which I append the variables that I need to send over to the web page.
I then get a response from the page, which is streamed into the StreamReader and displayed on a label on the form.
Thereafter I encrypt the password before storing it along with the username.

I simply store this in a file. This file is then read when the form is loaded, and its contents populate the username and password fields.

Here is a nice tutorial on basic file handling in C#:
https://msdn.microsoft.com/en-us/library/8bh11f1k.aspx
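The save/load step itself is only a few lines. Here is a sketch, assuming the encrypted password is already in a variable called encryptedPass and an arbitrary file name (the Decrypt helper is hypothetical):

// Save the credentials: username on the first line, encrypted password on the second
System.IO.File.WriteAllLines("credentials.txt", new[] { username.Text, encryptedPass });

// On form load, restore them if the file exists
if (System.IO.File.Exists("credentials.txt"))
{
    string[] saved = System.IO.File.ReadAllLines("credentials.txt");
    username.Text = saved[0];
    password.Text = Decrypt(saved[1]); // hypothetical decrypt helper
}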

The WebRequest used above relies of course on you having a php file that can get the data from that URL and store it into the database.

That is simply done as seen here (the GET part):

// read the values passed in the URL's query string
$staffNo = $_GET['staffNo'];
$status = $_GET['status'];
$pwd = $_GET['pwd'];
$message = $_GET['message'];

CSE Vision – Part 2 (deployment and webcam feed)

Aloha,

Part 2 is now complete 🙂

Took a while due to nobody being here… Who needs holidays anyway? I mean really now. 😛

The website has been deployed to a local web server meaning that everyone can now access it from anywhere on any device:

http://ictcms.tut.ac.za/csevision/view.php

Lecturers all have access to their accounts and are able to update their availability in real time:

Availability updater as seen from mobile

View from mobile phone

The PC in the lobby now starts up automatically after power failures and starts Chrome in full screen mode with the web page displayed.

Also, I have just installed a webcam and set up a live feed of the lobby. Useful for the secretary and for staff wanting to see who is waiting to see them. Have a peek below (it does not seem to update on Internet Explorer though – why would you be using IE anyway???):

EDIT(2016):

Removed this, firstly because eyespyfx no longer exists and secondly due to privacy issues. The webcam feed is now only available on the local network and runs at a much better frame rate.

 

When we get a USB extension cable we’ll move the camera to the corner of the room.

Up next:

Testing out the magnetic switches that will be attached to the doors. Yes, they have arrived 🙂:

Magnetic switch

I would also like to make my own web feed instead of using eyespyfx (will get a much better frame rate locally if I do that) but we’ll see if I get time for that.



CSE Vision – Part 1

First look at the project

Greetings earthlings,

This will be the first of multiple posts where I document the progress of CSE Vision.
CSE Vision is a project that my colleague Jan Spies and I at TUT – Computer Systems Engineering have been working on for a few days now.

CSE Vision

The idea for this project is to create a system that can be placed in the lobby of our offices to inform anyone who comes in of the following information regarding the lecturers in the department:

  • The status of our lecturers (we are going with AVAILABLE, BUSY and UNAVAILABLE for now)
  • Email address
  • Telephone extension
  • Office Number
  • Consultation times
  • Time table
  • A customizable message

The system should update in real time and allow the lecturers to modify all their information.

The end goal is to have magnetic switches installed on the lecturer doors that, when activated or deactivated, automatically change the status of the lecturers on the system.

Why CSE Vision?

This project aims to solve quite a few problems:

  • People (students and staff) are not aware of who is actually lecturing at our department
  • Trying to keep track of lecturers can be a difficult task
  • Students entering the department looking for a certain lecturer and then asking every other lecturer they encounter if they know where said lecturer is (this gets pretty annoying)
  • Students knocking on lecturers’ doors when they are busy and cannot be interrupted (such as when setting up exam papers)
  • Our department offers great content but our physical location has nothing to set us apart in any way

Current Project Status

At the moment there is a login screen, an update page (where lecturers can update their status and information) and the actual lecturer viewer page. All info updates in real time as it is changed in the DB.

Technologies used:

  • A website made with PHP, HTML, Javascript, AJAX and the ever useful Bootstrap.
  • Database is MySQL

A screenshot of the project running on my low res laptop (click to see it enlarged – same as article’s featured image):

First look at the project

Initial setup

The different colours used for the 3 states are visible here. Names and emails that are too long to fit on the “cards” are made to scroll. The custom messages written by the lecturers also scroll.

We ended up displaying minimal information with an emphasis on the visibility of the photos and the status colour.

Deciding what to put on the cards, as well as the actual design of the interface, took longer than actually getting the functionality going… typical.

 

The idea is that users can click on the individual cards to get more information about the lecturers such as time tables etc.

The actual project will be running on an HD screen that will be able to display a 5×5 grid of lecturer cards. This will be enough for the entire department.

 

What’s next?

 

  • Move the site from local to our web server.
  • Get the Arduino Ethernet boards running and install the switches on the doors.
  • Get a Fit PC 2 running that can display the website in the lobby.
  • Attach the screen to the glass portal.


Accessing the 8 elements surrounding an index in a 2D grid array

When attempting to do things such as implementing a 2D path-finding algorithm, you sometimes need to access the 8 tiles surrounding the tile you are currently working on and get or set properties of those tiles. This is easy, right? Increment/decrement the x and y index values of the array as needed, e.g. increment x by one, increment y by one, increment both x and y by one, etc.

This is illustrated in the professionally drawn picture below:


The first (terrible) method of achieving this that comes to mind is to just use 8 statements to check each element individually, like so (I will just use C# syntax, but this can, of course, work in any other language):

//mapArr is the 2D array that we are working with here for illustrative purposes
//left tile
var left = mapArr[unit_x - 1, unit_y];
//right tile
var right = mapArr[unit_x + 1, unit_y];
//top tile
var top = mapArr[unit_x, unit_y - 1];
//bottom tile
var bottom = mapArr[unit_x, unit_y + 1];
//top left tile
var topLeft = mapArr[unit_x - 1, unit_y - 1];
//top right tile
var topRight = mapArr[unit_x + 1, unit_y - 1];
//bottom left tile
var bottomLeft = mapArr[unit_x - 1, unit_y + 1];
//bottom right tile
var bottomRight = mapArr[unit_x + 1, unit_y + 1];

Just typing that without any of the “if statement checking” that would likely need to accompany this is TIRING and TEDIOUS.

Other than that though, what is so terrible about doing this?

1) It’s very easy to make a mistake while copy-pasting the same line over and over again and then editing it.
2) It looks TERRIBLE and makes your code far longer than it needs to be.
3) If you need to change this code at a later point, you will have to make the change in each of these 8 chunks…

A better solution would be to use a nested for loop:

for (int x = -1; x < 2; x++)
    for (int y = -1; y < 2; y++)
    {
        var neighbour = mapArr[unit_x + x, unit_y + y];
    }

See how nice and clean that looks?

The only potential problem with this is that it does not only access the 8 surrounding tiles. It also accesses the central tile (when both x and y are equal to zero). This may or may not be a problem. If it is, you can simply add an if statement to check for that scenario:

for (int x = -1; x < 2; x++)
    for (int y = -1; y < 2; y++)
    {
        if (!(x == 0 && y == 0))
        {
            var neighbour = mapArr[unit_x + x, unit_y + y];
        }
    }
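And for completeness, here is a sketch of the “if statement checking” mentioned earlier, which also guards against stepping off the edges of the map:

for (int x = -1; x < 2; x++)
    for (int y = -1; y < 2; y++)
    {
        int nx = unit_x + x, ny = unit_y + y;
        // skip the central tile and anything outside the array bounds
        if ((x == 0 && y == 0) ||
            nx < 0 || ny < 0 ||
            nx >= mapArr.GetLength(0) || ny >= mapArr.GetLength(1))
            continue;
        var neighbour = mapArr[nx, ny];
        // get or set properties of neighbour here
    }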

Smoothing with an auto-tiling algorithm

Ok, so my latest experiment was implementing a 2D auto-tiling or smoothing algorithm without using any actual tile sets.

You can see it in action here:
http://stonium.co.za/smoothing/

As I used no tile sets, all tiles are drawn solely via a couple of Processing’s basic drawing functions, namely rect() and quad().
I’ve left the grid view on so that one can more clearly see the effect that placing a tile has on the surrounding tiles’ quadrants.

The algorithm that I implemented can be seen here:
http://www.codeproject.com/Articles/106884/Implementing-Auto-tiling-Functionality-in-a-Tile-M

This algorithm makes smoothing easy!
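The core trick, roughly paraphrased from the linked article (this is my sketch, not its exact code): each of the four cardinal neighbours contributes one bit, so the match/no-match results combine into an index from 0 to 15 that selects which tile shape to draw.

// Compute a 4-bit auto-tile index from the four cardinal neighbours.
// Same(x, y) is a hypothetical helper that returns true when the cell
// at (x, y) holds the same terrain type as the tile being drawn.
int TileIndex(int x, int y)
{
    int index = 0;
    if (Same(x, y - 1)) index += 1; // north neighbour matches
    if (Same(x + 1, y)) index += 2; // east
    if (Same(x, y + 1)) index += 4; // south
    if (Same(x - 1, y)) index += 8; // west
    return index; // 0..15 picks one of 16 tile shapes
}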


Random Draw

First experimental type thing complete!

–Click To See Random Draw–

I decided to mess around with the random function in Processing and see just how random it is and if it is “equally” random (you’ll see what I mean just now).


User Interaction

I wanted to make it slightly more interesting by allowing the user to affect the graphical output without affecting the purpose of the sketch.
Users can use the mouse scroll wheel to dynamically change the size of the circles that are drawn to screen.
They can also click on the “Change Mode” button to change how the circles are drawn to the screen.
In one mode they are drawn uniformly to the screen in rows, in another they are drawn at random x,y co-ordinates, and in yet another they are drawn at the location of the mouse cursor.

Once the user is done messing around with it they can click the “Submit” button which stops the process and sends all the data to the database.

What It Does

So what this sketch does is generate circles in random colours and then draw them onto the screen in a variety of ways.
Each colour consists of red, green and blue (RGB) components. These individual R, G and B values are produced by Processing’s random() function; each component is generated separately for every circle.
The program then checks which colour component of the circle is the largest (R, G or B) and increments the appropriate counter.
e.g.
A circle is drawn with R(200), G(164) and B(234). Blue has the highest value, so the blue counter is incremented.
There are counters for red, green, blue, shades of grey, black and white.
Shades of grey are colours where all 3 RGB values are equal, but not equal to 0 or 255.
When they are all 0, a black circle is drawn and the black counter is incremented; when they are all 255, a white circle is drawn and the white counter is incremented.
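The classification step boils down to a few comparisons. Here is a sketch of the gist in C# (the actual sketch is written in Processing, and the counter names are made up); it assumes each component is an integer from 0 to 255:

// Classify one random colour and increment the matching (hypothetical) counter.
var rng = new Random();
int r = rng.Next(256), g = rng.Next(256), b = rng.Next(256);

if (r == 0 && g == 0 && b == 0) blackCount++;
else if (r == 255 && g == 255 && b == 255) whiteCount++;
else if (r == g && g == b) greyCount++; // equal, but not 0 or 255
else if (r > g && r > b) redCount++;
else if (g > r && g > b) greenCount++;
else if (b > r && b > g) blueCount++;
// Ties between the two largest components fall through; the real sketch
// has to break them somehow, this is just the gist.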

It is interesting to note how rarely the pure black and pure white colours are generated. (Assuming each component is an integer drawn uniformly from 0 to 255, the chance of all three being 0 is (1/256)^3, roughly 1 in 16.7 million.)
So far the R, G and B counters always seem to stay relatively equal, which of course is what you would expect statistically.
