Block programming

Block server

The feedback we received from the open house was mostly concerned with admin functionality and expanding the queue system. The expanded queue now stores user names and a program ID, among other things, which enables the frontend to display the queue to the user. The changes to the queue system also made it possible to edit programs that are already in the queue and to remove programs from the queue.
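
As a sketch of how the expanded queue entries and the new operations could look, here is a minimal Python model; the class, function and field names are our illustration, not the server's exact schema:

    class QueueEntry(object):
        """One queued program, now carrying the data the frontend displays."""
        def __init__(self, program_id, user_name, program):
            self.program_id = program_id  # lets a user find and update their entry
            self.user_name = user_name    # shown in the public queue view
            self.program = program        # the serialized block program

    def edit_program(queue, program_id, new_program):
        """Replace the body of a program that is already in the queue."""
        for entry in queue:
            if entry.program_id == program_id:
                entry.program = new_program
                return True
        return False

    def remove_program(queue, program_id):
        """Drop a queued program without disturbing the rest of the queue."""
        queue[:] = [e for e in queue if e.program_id != program_id]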

During this sprint we also worked on documentation, both documenting the API and writing the readme according to our standard.

We also implemented a filter that rejects a program if it is submitted too soon after the user's previous submission, or if the same user tries to add multiple copies of the same program.
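
A minimal sketch of such a filter, assuming the server tracks the last submission time per user; the names and the 30-second window are illustrative, not the real configuration:

    import time

    SUBMIT_COOLDOWN = 30.0  # illustrative: seconds a user must wait between submissions
    last_submission = {}    # user_name -> timestamp of the last accepted program

    def allow_submission(queue, user_name, program):
        now = time.time()
        last = last_submission.get(user_name)
        if last is not None and now - last < SUBMIT_COOLDOWN:
            return False  # submitted too soon after the previous program
        if any(e.user_name == user_name and e.program == program for e in queue):
            return False  # identical program from the same user is already queued
        last_submission[user_name] = now
        return True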

Blockly

The open house with Luleå Makerspace gave us a lot of work, mainly around showing the children more information about the queue and their program's status on the server. This was because most children don't have the patience to wait for their program to run when they have no estimated waiting time or known place in the queue (which updates whenever a program is executed). The children also wanted to be able to edit their program once it is queued, so they don't need to requeue under another program ID when they make a small change.
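
A sketch of how the place in the queue and an estimated wait could be reported to the frontend; the average runtime below is an assumed placeholder, not a measured value:

    AVERAGE_RUNTIME = 45.0  # assumed average seconds per program

    def queue_status(queue, program_id):
        """Return (place_in_queue, estimated_wait_in_seconds), or None if gone."""
        for place, entry in enumerate(queue):
            if entry.program_id == program_id:
                return place, place * AVERAGE_RUNTIME
        return None  # already executed or removed, so the queue has moved on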

Another feature currently in progress is a small admin interface where LMS employees can modify the queue by deleting programs, clearing the queue, and pausing or resuming the execution of programs.
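
A sketch of what those admin operations could look like if the execution loop checks a pause flag between programs; the structure is our illustration, not the finished interface:

    import threading
    import time

    paused = threading.Event()  # set -> execution is paused

    def pause_execution():
        paused.set()

    def resume_execution():
        paused.clear()

    def clear_queue(queue):
        del queue[:]

    def execution_loop(queue, execute):
        while True:
            if paused.is_set() or not queue:
                time.sleep(0.5)  # idle while paused or while the queue is empty
                continue
            entry = queue.pop(0)
            execute(entry.program)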

The final main piece of feedback was that the children wanted to be able to do maths and use variables when programming Pepper. This feature was already in development but was not stable enough before the 19th of November; it is now done by utilizing shadow blocks.

Lastly, the site is now available in both Swedish and English, as requested in previous sprints. This took longer than we wanted because Blockly cannot change language once it has rendered, for reasons we could not pin down, so the alternative solution was a pre-editor page, or homepage, where the user chooses a language. This page also introduced the choice between basic and advanced mode: basic mode can fully program Pepper but leaves out variables, functions and some other constructs in order to be more friendly and inviting for beginners.

Web interaction

Firstly, we tried to develop Pepper's interaction with YouTube. The intended idea is the same as with Google and Wikipedia: you enter the title of a video and the correct video is played. At first it seemed to be working, since we could play a video on Pepper's tablet. Later we came to realize that there was a lack of support for YouTube API calls from Python 2.7 and no way to autoplay the videos, which resulted in us dropping the YouTube interaction.
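
For reference, this is roughly how a video can be played on the tablet via NAOqi's ALTabletService from Python 2.7; the robot address and video URL below are placeholders:

    import qi

    PEPPER_IP = "192.168.1.10"  # placeholder address
    session = qi.Session()
    session.connect("tcp://%s:9559" % PEPPER_IP)
    tablet = session.service("ALTabletService")
    # playVideo needs a direct URL to a video file; it cannot autoplay an
    # arbitrary YouTube page, which is part of why we dropped the feature.
    tablet.playVideo("http://example.com/video.mp4")  # placeholder URL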

We also had an open house with LMS where we got feedback on our product. The feature that was the biggest success was the block programming, and one of the requests we received was to connect our web interaction with the block programming, which would enable Wikipedia and Google image interactions through the block programming. This is another reason for letting the YouTube interaction go and instead turning our resources towards getting the web interaction to work with the block programming.
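
As a sketch of the Wikipedia side of that connection, here is a lookup using Wikipedia's public REST summary endpoint; the function name is ours, and wiring it into a block is still ahead of us:

    import requests

    def wikipedia_summary(title, lang="en"):
        """Fetch the summary paragraph of a page, e.g. for Pepper to read aloud."""
        url = "https://%s.wikipedia.org/api/rest_v1/page/summary/%s" % (lang, title)
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return response.json().get("extract", "")

    print(wikipedia_summary("Pepper_(robot)"))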

Rock-paper-scissors

Following the open house we received suggestions for improvements. The participants found that Pepper mostly chose two of the three possible moves, so the selection of moves should be improved. In addition, Pepper had some issues finding the participants' hands; showing what Pepper sees on the tablet before a gesture is selected could solve this.

Regarding the gesture selection, we are almost finished with a random selection algorithm based on memory: the previously selected moves weigh the probability so that the next move is more likely to be one that has not been selected for some time.
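
A minimal sketch of the idea: every move starts with the same weight, a played move loses weight and the others recover, so a long-unseen move becomes more likely. The decay factor is an illustrative choice:

    import random

    MOVES = ["rock", "paper", "scissors"]
    DECAY = 0.5  # illustrative: weight multiplier applied to the move just played
    weights = {move: 1.0 for move in MOVES}

    def next_move():
        pick = random.uniform(0, sum(weights.values()))
        chosen = MOVES[-1]
        for move in MOVES:
            pick -= weights[move]
            if pick <= 0:
                chosen = move
                break
        for move in MOVES:
            if move == chosen:
                weights[move] *= DECAY  # just played: less likely next time
            else:
                weights[move] = min(1.0, weights[move] / DECAY)  # recover towards 1.0
        return chosen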

For the tablet feature, we have not found any NAOqi functionality that supports video streaming, and have therefore decided to take and show images with a high update frequency instead. However, running our code remotely from Pepper introduced some latency and the pictures did not update as fast as we would like. Therefore we have started to run our code locally on Pepper instead, which gives a better update frequency but some flicker when switching images.
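
A sketch of the image loop, assuming the frames are written into a Choregraphe app's html folder, which the tablet can reach over the robot's local address; the app path, URL and timings are assumptions:

    import qi
    import time
    from PIL import Image

    APP_HTML = "/home/nao/.local/share/PackageManager/apps/rps/html"  # assumed app path
    TABLET_URL = "http://198.18.0.1/apps/rps/preview.jpg"             # assumed tablet-side URL

    session = qi.Session()
    session.connect("tcp://127.0.0.1:9559")  # running locally on Pepper
    video = session.service("ALVideoDevice")
    tablet = session.service("ALTabletService")

    # Top camera (0), 320x240 (resolution 1), RGB colour space (11), 15 fps.
    handle = video.subscribeCamera("rps_preview", 0, 1, 11, 15)
    try:
        for i in range(100):
            frame = video.getImageRemote(handle)
            width, height, raw = frame[0], frame[1], bytes(bytearray(frame[6]))
            Image.frombytes("RGB", (width, height), raw).save(APP_HTML + "/preview.jpg")
            # Cache-busting query so the tablet browser fetches the new frame;
            # the reload between frames is where the flicker comes from.
            tablet.showImage(TABLET_URL + "?ts=%d" % i)
            time.sleep(0.2)
    finally:
        video.unsubscribe(handle)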

Image recognition

The feedback from the open house directed at the image recognition group was that Pepper would often detect the wrong gesture. In response, we retrained our KNN classifier using a larger dataset captured with a computer camera. A major future improvement would be to increase both the size and variety of the dataset used to train our models.
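
A minimal sketch of the retraining step using scikit-learn; the feature layout (one flattened image per row) and the value of k are illustrative stand-ins for our actual pipeline:

    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    def train_gesture_knn(features, labels, k=5):
        """features: one flattened image per row; labels: rock/paper/scissors."""
        X_train, X_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.2, random_state=0)
        knn = KNeighborsClassifier(n_neighbors=k)
        knn.fit(X_train, y_train)
        print("held-out accuracy: %.2f" % knn.score(X_test, y_test))
        return knn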

Pepper often chose the wrong hand when detecting the gesture. We believe that selecting the hand positioned highest in the image would solve this, since the hand showing the gesture is usually on top.
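
A minimal sketch of the proposed fix; note that in image coordinates "on top" means the smallest y value, and the (x, y, w, h) box format is an assumption about the detector's output:

    def topmost_hand(hand_boxes):
        """hand_boxes: list of (x, y, w, h) detections; return the highest hand."""
        if not hand_boxes:
            return None
        return min(hand_boxes, key=lambda box: box[1])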