Jekyll2019-01-20T21:21:37-05:00https://seb.mamessier.com/feed.xmlSebastien MamessierStandard Deviation is where I gather my thoughts and pieces of work that could benefit a broader audience one way or another. You will find academic projects, thoughts about new technologies, software tutorials as well as my resume.Sebastien MamessierIntuitive vs Nonintuitive decision making2016-06-01T15:52:23-05:002016-06-01T15:52:23-05:00https://seb.mamessier.com/2016/06/01/intuitive-vs-nonuntuitive-decisions<p>This blog post summarizes <a href="http://psycnet.apa.org/journals/xge/135/3/409/">Simmons and Nelson’s study</a> about how people incorporate constraining/contradicting information into their initial intuitive thought when asked to make a decision. It was written in the context of the <code class="highlighter-rouge">Intro to Cognitive Science</code> class.</p>
<h3 id="intuitive-confidence-choosing-between-intuitive-and-nonintuitive-alternatives">Intuitive Confidence: Choosing Between Intuitive and Nonintuitive Alternatives</h3>
<p>Authors: Joseph P. Simmons (Princeton University), Leif D. Nelson (New York University)</p>
<h4 id="introduction">Introduction</h4>
<p>People seem to favor intuitive options over equally or more valid nonintuitive options when making decisions. This paper sets out to explain how people weigh intuitive answers against nonintuitive alternatives that oppose their initial intuition.</p>
<p>It has been shown that people often prefer to follow their intuition even when it conflicts with other available information, leading to judgment biases. Simmons and Nelson review the relevant related phenomena, such as <code class="highlighter-rouge">transparency illusions</code>, <code class="highlighter-rouge">beliefs in explicitly false statements</code> and other biases appearing at different levels of human cognition. It seems plausible that two distinct systems are competing: the first is responsible for fast, effortless, heuristic- and knowledge-based decisions, whereas the second - much slower and resource-demanding - attempts to correct the initial judgment using available cues, rules and reasoning.</p>
<p>Some theories indicate that <code class="highlighter-rouge">cognitive laziness</code> could prevent <code class="highlighter-rouge">System 2</code> from kicking in and contributing to the decision. Others state that the sequential nature of the judgment process gives primacy to the intuitive thought, as <code class="highlighter-rouge">System 2</code> has to persuasively falsify <code class="highlighter-rouge">System 1</code>’s conclusions. This phenomenon is commonly referred to as <code class="highlighter-rouge">anchoring and adjustment</code>.</p>
<p>However, such models cannot fully explain the observed proportion of intuitive biases among motivated reasoners who fully process contradictory information. Simmons and Nelson propose a so-called <code class="highlighter-rouge">dual-process</code> model that explains the observed phenomena and makes relevant predictions.</p>
<h4 id="simmon-and-nelsons-model">Simmons and Nelson’s model</h4>
<p>The authors’ model rests on four hypotheses:</p>
<p>1) Intuitions are chosen more often because people hold them with high confidence.</p>
<p>2) The magnitude of an opposing piece of information matters for invalidating an intuition.</p>
<p>3) People who are more confident in their intuitions follow them more often.</p>
<p>4) People who <code class="highlighter-rouge">betray</code> their intuition are less confident in their final choice.</p>
<p>These rather natural hypotheses are then evaluated by the authors using the prediction of sporting events as a field case, as it provides the experiment with the required variability in input magnitude and intuitive confidence.</p>
<p>The football bookmakers’ <code class="highlighter-rouge">point spread</code> concept is used to demonstrate how most people handle this kind of question. Here, the initial intuition answers the question <code class="highlighter-rouge">which team will win</code>, while the point spread serves as the constraining information that should trigger further reasoning. All four hypotheses can be nicely instantiated in this scenario and evaluated against historical data.</p>
<p>The following bullets summarize the results of Simmons and Nelson’s studies on the point spread example:</p>
<ul>
<li>
<p>Most people bet on favorites in 90% of the games despite the point spread. Hypothesis 1 is verified (this holds for both rookies and experts).</p>
</li>
<li>
<p>The spread magnitude negatively impacts this proportion. Hypothesis 2 is verified (again for both rookies and experts).</p>
</li>
<li>
<p>People betting on underdogs showed less confidence in the outcome, which verifies hypothesis 4.</p>
</li>
<li>
<p>Finally, people bet on favorites even when setting the spread themselves. This study was designed to eliminate the hypothesis that people simply don’t understand the <code class="highlighter-rouge">point spread</code> concept when betting on football results. Moreover, this part of the study also verifies hypothesis 4.</p>
</li>
</ul>
<h4 id="conclusion">Conclusion</h4>
<p>The main finding of this article is that confidence in intuition is the most important factor in choosing between intuitive and nonintuitive options. Countermeasures are proposed, such as artificially lowering one’s confidence in their own intuition. Another possible explanation is that people tend to answer a relaxed version of the question (in this case, who will win instead of who will beat the point spread), or sometimes even a different question (which team do you prefer).</p>Sebastien MamessierSébastien Mamessier2016-05-15T19:39:26-05:002016-05-15T19:39:26-05:00https://seb.mamessier.com/2016/05/15/cv<div style="float:right; padding-left:20px;">
<h4>CV/Resume</h4>
<p><a href="../docs/misc/cv.pdf">Here</a> you can find my CV in PDF format.
</p>
<p>
<img src="https://seb.mamessier.com/assets/images/2016/05/Seb2.JPG" width="400px" />
</div>
<!--<iframe src = "/ViewerJS/#/docs/misc/cv.pdf" width='800px' height='600px' allowfullscreen webkitallowfullscreen style="margin-left:auto; margin-right:auto; max-width:90%"></iframe>-->
I'm currently a Senior Graduate Researcher and Robotics PhD candidate at the Georgia Tech School of Aerospace Engineering, working on adaptive controls for semi-autonomous cars as well as collaborative artificial intelligence for future commercial aviation cockpits.
During my studies, I worked for two and a half years at Airbus Group Innovations in Munich, Germany, developing cockpit/flight automation concepts and evaluating them in Virtual Reality experiments. Before that, I received an M.S. in Aerospace Engineering from Georgia Tech and a Diplôme d'Ingénieur from Supaero (France) in 2013.
My PhD thesis focuses on computational modeling of safe and efficient human-AI collaboration. Potential applications of my research span from flight deck automation to the integration of adaptive controls in semi-automated cars.
My fields of expertise and interest include Human Factors, Controls, Artificial Intelligence and Software Engineering. A private pilot since 2013, I'm also a big soccer fan; I practice astronomy and contribute to open-source projects.
</p></div>Sebastien MamessierHLA Certi, getting started !2016-04-10T20:43:25-05:002016-04-10T20:43:25-05:00https://seb.mamessier.com/2016/04/10/hla-certi-getting-started<p>HLA stands for High Level Architecture, a set of specifications for distributed simulation systems that originated from the US Department of Defense (DoD) around 1996. (Needs citation)</p>
<h4 id="hla-overview">HLA Overview</h4>
<h6 id="concepts">Concepts</h6>
<p>TODO</p>
<h6 id="features">Features</h6>
<h4 id="certi">Certi</h4>
<p>CERTI is an open-source implementation of the HLA specifications. It provides an HLA Runtime Infrastructure (RTI), a Federation Ambassador module, and APIs to build an HLA federation.</p>
<h4 id="compiling-certi">Compiling CERTI</h4>
<h6 id="predependencies">Prerequisites</h6>
<p>You will need <code class="highlighter-rouge">CMake</code> and <code class="highlighter-rouge">GCC</code> installed on your Linux system. I found it mandatory to install <code class="highlighter-rouge">YACC</code> and <code class="highlighter-rouge">LEX</code> as well. <code class="highlighter-rouge">libxml2-dev</code> is optional; install it if you want to export/import federations.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install build-essential cmake byacc flex libxml2-dev
</code></pre></div></div>
<h6 id="build-certi-from-sources">Build Certi from sources</h6>
<p>The first thing to do to work with Certi is to get the sources from Certi’s git forge.
Then create a build folder.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone git://git.savannah.nongnu.org/certi.git
<span class="nb">cd </span>certi
<span class="nb">mkdir </span>build
<span class="nb">cd </span>build
</code></pre></div></div>
<p>Run CMake, picking an installation folder <code class="highlighter-rouge">installFolder</code>, then compile with make. In <code class="highlighter-rouge">make -jn</code>, replace <code class="highlighter-rouge">n</code> with your number of cores to speed up compilation.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cmake <span class="nt">-DCMAKE_INSTALL_PREFIX</span><span class="o">=</span>/installFolder ..
make <span class="nt">-j4</span>
make <span class="nb">install</span>
</code></pre></div></div>
<p>When using recent versions of GCC, compilation might fail with the following error message:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span> 66%] Linking CXX executable TestFedTime
libFedTimed.so.1.0.0: undefined reference to <span class="sb">`</span>typeinfo <span class="k">for </span>RTI::Exception<span class="sb">`</span>
</code></pre></div></div>
<p>This is due to a cyclical dependency (FedTime is supposed to throw RTI::Exception) that is not declared in CMake (to make the build work). However, throwing exceptions requires a <a href="https://gcc.gnu.org/wiki/Visibility">typeinfo lookup</a>, which triggers GCC’s error. Making the RTI::Exception destructor pure virtual seems to fix the problem, probably because in the absence of non-inline virtual methods GCC copies the typeinfo to all relevant translation units (<a href="https://gcc.gnu.org/onlinedocs/gcc/Vague-Linkage.html">see here</a>).</p>
<p>The last step is to run a script that sets up the appropriate environment variables.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">source </span>installFolder/myCERTI_env.sh
</code></pre></div></div>
<p>I recommend adding this line at the end of your <code class="highlighter-rouge">~/.bashrc</code> file so that you don’t have to rerun it every time.</p>
<h4 id="getting-started-with-hla-certi">Getting started with HLA Certi</h4>
<p>Now we want to run the CERTI tutorial.
First, clone the applications repository:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone git://git.savannah.nongnu.org/certi/applications.git Certi-Apps
</code></pre></div></div>
<p>It’s now time to build the tutorial:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd Certi-Apps/HLA_Tutorial
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=appsInstallFolder ..
make
make install
</code></pre></div></div>
<p>Now open three terminals and run respectively:</p>
<ol>
<li><code class="highlighter-rouge">cd</code> into <code class="highlighter-rouge">appsInstallFolder/share/federations</code> and launch the runtime <code class="highlighter-rouge">rtig</code></li>
<li>Launch the tutorial controller app <code class="highlighter-rouge">appsInstallFolder/bin/controllerFederate</code></li>
<li>Launch the tutorial process app <code class="highlighter-rouge">appsInstallFolder/bin/processFederate</code></li>
</ol>
<p>Terminal <code class="highlighter-rouge">2</code> should then guide you through the tutorial; make sure you observe all three terminals’ outputs during its execution.</p>
<p>Sources:</p>
<ul>
<li><a href="https://gcc.gnu.org/wiki/Visibility">https://gcc.gnu.org/wiki/Visibility</a></li>
<li><a href="https://gcc.gnu.org/onlinedocs/gcc/Vague-Linkage.html">https://gcc.gnu.org/onlinedocs/gcc/Vague-Linkage.html</a></li>
<li><a href="http://www.nongnu.org/certi/certi_doc/Install/html/build.html">http://www.nongnu.org/certi/certi_doc/Install/html/build.html</a></li>
</ul>Sebastien MamessierOptimizing the Dell XPS 15 9550 for Ubuntu 16.04 / 16.102016-04-06T04:34:26-05:002016-04-06T04:34:26-05:00https://seb.mamessier.com/2016/04/06/dell-xps-15-9550-ubuntu-16-04<p>This laptop is a combination of beauty and performance that many developers will appreciate. Windows is not always the best choice for it, and one might greatly benefit from a dual-boot setup. This article describes how to set up Ubuntu 16.04 efficiently on the XPS 15 9550.</p>
<h5 id="installation">Installation</h5>
<p>I followed the steps shown on <a href="http://ubuntuforums.org/showthread.php?t=2317843">this page</a> and they worked quite well. I suggest using <a href="https://rufus.akeo.ie/">Rufus</a> to create the Ubuntu bootable USB disk on Windows.
I had to launch the Live CD first and then install from there (the direct install option was somehow buggy).
You get something quite convincing out of the box, but there is still some work needed to get everything working perfectly.</p>
<h5 id="freezes-on-windows-dual-boot">Freezes on Windows (dual boot)</h5>
<p>After switching from RAID to AHCI - as suggested in <a href="http://ubuntuforums.org/showthread.php?t=2317843">this tutorial</a> - you may encounter BSOD freezes, with the only hint being <code class="highlighter-rouge">CRITICAL_PROCESS_DIED</code>. This seems to be due to the SSD driver; installing the <a href="http://www.samsung.com/global/business/semiconductor/minisite/SSD/global/html/support/downloads.html">latest Samsung 950 Pro drivers</a> solved the problem for me.</p>
<h5 id="hidpi">HiDpi</h5>
<p>For 4K (HiDPI) screens, Gnome lets you scale everything up (as in Windows 10). Just go to <code class="highlighter-rouge">System settings > Display</code> and set the <code class="highlighter-rouge">Scale for menu and bars</code> slider to something like <code class="highlighter-rouge">2.25</code>, which worked well for me. Note that Qt-based apps (e.g. Qt Creator, TeX Studio, …) might not honor this setting.</p>
<p>For Qt 5 applications, the ArchWiki <a href="https://wiki.archlinux.org/index.php/HiDPI#Qt_5">HiDPI page</a> suggests creating the file <code class="highlighter-rouge">/etc/profile.d/qt-hidpi.sh</code> and giving it execute permission (<code class="highlighter-rouge">sudo chmod +x /etc/profile.d/qt-hidpi.sh</code>). You will need to restart Ubuntu for this change to take effect.</p>
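<p>The post doesn’t show the file’s contents. Per the ArchWiki page linked above, something like the following should work for Qt 5.6 and later (the environment variable comes from ArchWiki, not from the original post):</p>

```shell
#!/bin/sh
# /etc/profile.d/qt-hidpi.sh
# Let Qt 5.6+ applications auto-detect the screen's pixel density.
export QT_AUTO_SCREEN_SCALE_FACTOR=1
```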
<h6 id="multi-screen-setup">Multi-screen setup</h6>
<p>I found it very painful to set up HiDPI settings independently for multiple screens (for instance, a 1080p external monitor alongside your 4K laptop screen). I did not find any solution using the Nvidia drivers (<code class="highlighter-rouge">xrandr</code> crashes), but the following setup - from <a href="https://wiki.archlinux.org/index.php/HiDPI">here</a> - works when switching over to Intel (using <code class="highlighter-rouge">nvidia-settings</code>).</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>xrandr --output eDP-1 --auto --output HDMI-1 --auto --panning 3840x2160+3840+0 --scale 2x2 --right-of eDP-1
</code></pre></div></div>
<p>This puts your external monitor to the right of your 4K laptop screen. Make sure to increase the scale in Ubuntu’s Displays GUI as well.</p>
<h5 id="graphics">Graphics</h5>
<p>Nvidia drivers for Linux seemed pretty unstable on the XPS until I tried version 375.xx, which works very well.</p>
<p>First, add the PPA containing up-to-date packages:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
</code></pre></div></div>
<p>Then you can install the drivers using</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install nvidia-375
</code></pre></div></div>
<p>You can switch back and forth between different graphics drivers using the <code class="highlighter-rouge">Additional drivers</code> Gnome GUI.</p>
<h5 id="bluetooth">Bluetooth</h5>
<p>Out of the box, Bluetooth can’t find any device. Following these dark-magic steps (from the <a href="http://ubuntuforums.org/showthread.php?t=2317843">ubuntuforums thread</a>) fixed it for me, but I would recommend investigating before blindly applying this fix.</p>
<ol>
<li>Download the firmware from an obscure Dropbox: <a href="https://www.dropbox.com/s/8goc4omhnzxij93/BCM-0a5c-6410.hcd?dl=0">https://www.dropbox.com/s/8goc4omhnzxij93/BCM-0a5c-6410.hcd?dl=0</a></li>
<li><code class="highlighter-rouge">sudo cp BCM-0a5c-6410.hcd /lib/firmware/brcm/</code></li>
<li>Reboot</li>
</ol>
<h6 id="fixing-palm-detection-tested-on-ubuntu-1604">Fixing Palm detection (tested on Ubuntu 16.04)</h6>
<p>Something super annoying happens when typing: the palm of your hand accidentally taps the touchpad. This has the undesirable effect of jumping the cursor to wherever the mouse pointer is, selecting random chunks of text and messing up your input. It took me a while to find a proper solution; this one - from <a href="http://wiki.yobi.be/wiki/Laptop_Dell_XPS_15">here</a> - seems to be working.</p>
<p>Add the following line to the file <code class="highlighter-rouge">/etc/modprobe.d/blacklist.conf</code>:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>blacklist i2c_designware-platform
</code></pre></div></div>
<p>Reboot your Linux system.</p>
<h6 id="energy-comsumption">Energy consumption</h6>
<p>The battery lasts longer on Windows than on Linux, so there is a lot of room for improvement here. It may be due to Skylake power management, which still needs some tuning on Linux. The first thing to do to save battery is to use the Intel GPU (install <code class="highlighter-rouge">nvidia-prime</code>, then the Nvidia Settings program will let you switch). I ran some battery tests using <code class="highlighter-rouge">powertop</code>; the best I can get with Ubuntu (on the 4K model) is <code class="highlighter-rouge">10.5W</code> when idle with screen brightness at <code class="highlighter-rouge">20%</code>, and <code class="highlighter-rouge">12W</code> with Wifi/Firefox on - about twice as much when using the Nvidia GPU. I tested Linux kernel 4.4.2 and found no difference from the 4.4.0 shipped in Xenial by default. I’ll keep investigating, as Windows apparently manages to use only 5W when idle.</p>
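<p>If you prefer the command line to the Nvidia Settings GUI, the <code class="highlighter-rouge">prime-select</code> tool that ships with <code class="highlighter-rouge">nvidia-prime</code> performs the same GPU switch. A sketch (not from the original post; a logout or reboot is needed for the change to take effect):</p>

```shell
sudo apt-get install nvidia-prime  # if not already installed
sudo prime-select intel            # switch to the power-saving Intel GPU
prime-select query                 # prints the currently selected profile
# sudo prime-select nvidia         # switch back for GPU-heavy work
```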
<h5 id="update-may-2016-testing-kernel-46">(Update May 2016) Testing Kernel 4.6</h5>
<p>Linux kernel 4.6 was released with apparent improvements for Skylake architectures. Let’s test it!
It is preferable to switch from the Nvidia drivers to the free driver before installing the new kernel (I got a bunch of errors before I switched); you can do that in Ubuntu’s <code class="highlighter-rouge">Additional drivers</code> utility program. Unfortunately, energy consumption did not improve; however, startup and shutdown times feel faster and the dual-screen setup is much more stable (no more freezes)!
Also, a very annoying bug was making any Chromium-based editor (Atom, Visual Studio Code) very laggy on Intel graphics; this seems to be fixed!</p>
<h5 id="update-feb-2017-ubuntu-1610-yakety-yak">(Update Feb 2017) Ubuntu 16.10 Yakkety Yak</h5>
<p>The latest Ubuntu update installs kernel 4.8, which can only be good for your Skylake CPU. I haven’t had time for a quantitative analysis yet, but the system is stable.</p>
<p>Sources:</p>
<ul>
<li><a href="http://wiki.yobi.be/wiki/Laptop_Dell_XPS_15">http://wiki.yobi.be/wiki/Laptop_Dell_XPS_15</a></li>
<li><a href="http://ubuntuforums.org/showthread.php?t=2317843">http://ubuntuforums.org/showthread.php?t=2317843</a></li>
</ul>Sebastien MamessierThis laptop is a combination of beauty and performance that many developers will appreciate. Therefore Windows is not alway the best choice and one might greatly benefit from a dual boot setting. This article describes how to setup efficiently Ubuntu 16.04 on the XPS 15 9550. Installation I followed the steps shown on this page and it seemed to work quite well. I suggest to use Rufus to create the Ubuntu bootable USB disk on Windows. I just had to launch the Live CD first and then install from there (The direct install option was buggy somehow). You get something quite convincing out of the box, but there is still some work to get everything work perfectly. Freezes on Windows (dual boot) After switching from Raid to AHCI - as suggested in this tutorial - you may encounter BSOD freezes - with the only hint being CRITICAL_PROCESS_DIED. It seems to be due to the SSD driver, installing the last Samsung 950 pro drivers solved the problem for me. HiDpi For 4K (HiDpi) screens, Gnome allows you to have everything scaled up (same as in windows 10). For this just go in System settings > Display and use the Scale for menu and bars slider to something like 2.25. This was good for me. Qt-based apps might not use this settings. (Example: Qt Creator / Tex Studio, …) For QT5 applications, ArchiWiki’s HiDpi page rightfully suggests to create the file /etc/profile.d/qt-hidpi.sh, give it execution permission (sudo chmod +x /etc/profile.d/qt-hidpi.sh). You will need to restart Ubuntu to account for this change. Multi-screen setup I found it very painful to independently setup HiDpi settings for multiple screens. (For instance, one 1080p external monitor along with your 4k laptop screen). I did not find any solution using the Nvidia drivers (xrandr crashes), and the following setup - from here - works when switching over to Intel (using nvidia-settings). 
xrandr --output eDP-1 --auto --output HDMI-1 --auto --panning 3840x2160+3840+0 --scale 2x2 --right-of eDP-1 This will allow you to have your external monitor on the right of your 4k laptop. Make sure to increase the Scale in Ubuntu’s Displays GUI as well. Graphics Nvidia drivers for Linux seemed pretty unstabled on the XPS until I tried version 375.xx which seems to work very well. First, add the ppa containing up-to-date packages: sudo add-apt-repository ppa:graphics-drivers/ppa sudo apt update Then you can install the drivers using sudo apt-get install nvidia-375 You can switch back and forth between different graphics drivers using the Additional drivers Gnome GUI. Bluetooth Out of the box, bluetooth can’t find any device. Following this dark magic steps (from the ubuntuforum thread) fixed it for me, but I would recommend you to investigate this before blindly apply this fix. Download the firmware from an obscure dropbox https://www.dropbox.com/s/8goc4omhnzxij93/BCM-0a5c-6410.hcd?dl=0 sudo cp BCM-0a5c-6410.hcd /lib/firmware/brcm/ Reboot Fixing Palm detection (tested on Ubuntu 16.04) Something super annoying that happens when typing is that the palm of your hand accidently taps the touchpad. This has the undesirable effect of jumping the cursor to wherever the mouse is, selecting random chunks of texts and messing up with your input. It took me a while to find a proper solution, this seems to be working - from [here](http://wiki.yobi.be/wiki/Laptop_Dell_XPS_15](http://wiki.yobi.be/wiki/Laptop_Dell_XPS_15). Add to the file /etc/modprobe.d/blacklist.conf the following line: blacklist i2c_designware-platform Reboot your linux system. Energy comsumption The battery lasts longer on Windows than Linux, so there is a lot of room for improvement on this aspect. It may be due to Skylake processors power management which still needs some tuning on Linux. 
The first thing to do to save battery is to use the Intel GPU (for this, install nvidia-prime; the Nvidia Settings program will then allow you to switch). I ran some battery tests using powertop, and the best I can get with Ubuntu (with the 4K model) is 10.5 W when idle with screen brightness at 20%, and 12 W with Wifi/Firefox on - about twice as much when using the Nvidia GPU. I tested Linux kernel 4.4.2 and found no difference with the 4.4.0 present in Xenial by default. I’ll keep investigating this, as it appears Windows manages to use only 5 W when idle. (Update May 2016) Testing kernel 4.6 Linux kernel 4.6 got released, seemingly with improvements regarding Skylake architectures. Let’s test it! It is preferable to switch from the Nvidia drivers to the free driver before installing the new kernel (I got a bunch of errors before I switched); you can do that in Ubuntu’s Additional drivers utility program. Unfortunately, energy consumption did not improve; however, I feel startup and shutdown times improved, and the dual-screen setup is much more stable! (No more freezes.) Also, a very annoying bug was making any Chromium-based IDE (Atom, Visual Studio Code) very laggy using Intel graphics; this seems to be fixed! (Update Feb 2017) Ubuntu 16.10 Yakkety Yak The latest Ubuntu update installs kernel 4.8, which can only be good for your Skylake CPU. I did not have time for a quantitative analysis yet, but the system is stable. Sources http://wiki.yobi.be/wiki/Laptop_Dell_XPS_15 http://ubuntuforums.org/showthread.php?t=2317843Embed documents in your Ghost blog using ViewerJS and Express2016-03-06T23:25:09-05:002016-03-06T23:25:09-05:00https://seb.mamessier.com/2016/03/06/embed-documents-in-your-ghost-blog-using-viewerjs<h2 id="introduction">Introduction</h2>
<p>Whether you are using Ghost as a blogging platform or to host your CV and research projects, it can come in handy to embed documents such as PDF presentations or spreadsheets in a post. Embedding documents ensures that your readers don’t have to download anything, which makes their experience smoother.</p>
<h2 id="viewerjs">ViewerJS</h2>
<p><a href="http://viewerjs.org/">ViewerJS</a> is a JavaScript library making it easy to embed self-hosted documents in the pages of your website. It uses Mozilla’s well-known <a href="https://mozilla.github.io/pdf.js/">PDF.js</a> and KO’s <a href="http://webodf.org/">WebODF</a> under the hood. A ViewerJS plug-in is available for WordPress, but no such thing seems to exist to ease integration for Ghost bloggers.</p>
<h2 id="ghost-and-files">Ghost and files</h2>
<p>Ghost doesn’t host documents for you as it does with post images, so we will have to set up our own simple file server. We will achieve this using the famous <code class="highlighter-rouge">Express</code> module for NodeJS.</p>
<h5 id="setting-up-a-minimalistic-file-server">Setting up a minimalistic file server</h5>
<p>First you will need to download ViewerJS from their <a href="http://viewerjs.org/getit/">GetIt page</a>. On a Linux-based server, you can use the command line:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget http://viewerjs.org/releases/ViewerJS-latest.zip
unzip ViewerJS-latest.zip <span class="nt">-d</span> ViewerJS
</code></pre></div></div>
<p>Create a folder somewhere on your server and copy the ViewerJS folder inside.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">mkdir </span>docServer
<span class="nb">cp</span> <span class="nt">-R</span> ~/Downloads/ViewerJS/viewerjs-0.5.8/ViewerJS ./docServer/
<span class="nb">cd </span>docServer
</code></pre></div></div>
<p>Now that we are in <code class="highlighter-rouge">docServer</code> we can install the <code class="highlighter-rouge">NodeJS</code> module <code class="highlighter-rouge">Express</code> which will serve the ViewerJS library and the embedded documents.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>npm <span class="nb">install </span>express
</code></pre></div></div>
<p>Finally, we create a <code class="highlighter-rouge">server.js</code> file containing the minimalistic server code. (We picked port 2375, but you can use any available port - preferably one above 1024, so that you don’t need sudo privileges to run the server.)</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">var</span> <span class="nx">express</span> <span class="o">=</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'express'</span><span class="p">);</span>
<span class="kd">var</span> <span class="nx">app</span> <span class="o">=</span> <span class="nx">express</span><span class="p">();</span>
<span class="nx">app</span><span class="p">.</span><span class="nx">use</span><span class="p">(</span><span class="s1">'/ViewerJS'</span><span class="p">,</span> <span class="nx">express</span><span class="p">.</span><span class="kr">static</span><span class="p">(</span><span class="nx">__dirname</span> <span class="o">+</span> <span class="s1">'/ViewerJS'</span><span class="p">));</span>
<span class="nx">app</span><span class="p">.</span><span class="nx">use</span><span class="p">(</span><span class="s1">'/docs'</span><span class="p">,</span> <span class="nx">express</span><span class="p">.</span><span class="kr">static</span><span class="p">(</span><span class="nx">__dirname</span> <span class="o">+</span> <span class="s1">'/data'</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">server</span> <span class="o">=</span> <span class="nx">app</span><span class="p">.</span><span class="nx">listen</span><span class="p">(</span><span class="mi">2375</span><span class="p">);</span>
</code></pre></div></div>
<p>Let’s now launch the server. (We assume that you use <a href="https://github.com/Unitech/pm2">PM2</a> to run Ghost on your production server - if not, you should seriously consider it.)</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pm2 start server.js <span class="nt">--name</span> docServer
</code></pre></div></div>
<h3 id="even-better-on-port-80">Even better on port 80</h3>
<p>We assume that you have a Ghost blog proxied through a web server such as Nginx or Apache, and that you have sufficient privileges to modify virtual host configuration files.</p>
<h5 id="apache">Apache</h5>
<p>Modify your existing Ghost virtual host to make it look like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <VirtualHost *:80>
ServerName yourghostblog.com
ProxyPreserveHost on
ProxyPass /docs/ http://localhost:2375/docs/
ProxyPass /ViewerJS/ http://localhost:2375/ViewerJS/
ProxyPass / http://localhost:2368/
</VirtualHost>
</code></pre></div></div>
<h5 id="nginx">Nginx</h5>
<p>ToDo</p>
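A minimal Nginx server block mirroring the Apache setup above might look like the following. This is an untested sketch: adjust the server name and ports to your own install.

```nginx
server {
    listen 80;
    server_name yourghostblog.com;

    # Documents and the ViewerJS library go to the Express file server
    location /docs/ {
        proxy_pass http://localhost:2375/docs/;
    }

    location /ViewerJS/ {
        proxy_pass http://localhost:2375/ViewerJS/;
    }

    # Everything else goes to Ghost
    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:2368/;
    }
}
```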
<h3 id="synchronizing-files">Synchronizing files</h3>
<p>It is quite annoying to upload documents through SSH. To reduce the pain, you can use <a href="https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps">rsync</a> (on Mac and Linux) to synchronize your documents between your local environment and the server hosting the Ghost blog.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Here we go: you can now include embedded documents following the ViewerJS instructions. In a Ghost blog post, just use the following HTML:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><iframe src = "/ViewerJS/#/docs/path/to/your/doc.pdf" width='100%' height='600' allowfullscreen webkitallowfullscreen></iframe>
</code></pre></div></div>
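If you embed several documents, a tiny helper keeps the iframe <code class="highlighter-rouge">src</code> consistent. This helper is hypothetical - it just encodes the convention shown above, where ViewerJS expects the document path after the <code class="highlighter-rouge">#</code> fragment:

```javascript
// Hypothetical helper: build the ViewerJS iframe src for a hosted document.
// ViewerJS reads the document path from the '#' fragment of its URL.
function viewerUrl(docPath) {
  const clean = docPath.startsWith('/') ? docPath : '/' + docPath;
  return '/ViewerJS/#' + clean;
}

console.log(viewerUrl('docs/path/to/your/doc.pdf'));
// → /ViewerJS/#/docs/path/to/your/doc.pdf
```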
<iframe src="/ViewerJS/#/docs/supaero/trex2A/trexTB20.pdf" width="100%" height="600" allowfullscreen="" webkitallowfullscreen=""></iframe>
<object data="docs/supaero/trex2A/trexTB20.pdf" type="application/pdf" width="100%" height="100%">
<p>It appears you don't have a PDF plugin for this browser.
No biggie... you can <a href="docs/supaero/trex2A/trexTB20.pdf">click here to
download the PDF file.</a></p>
</object>Sebastien MamessierUsing Rosbridge and Roslib JS2016-03-06T05:27:53-05:002016-03-06T05:27:53-05:00https://seb.mamessier.com/2016/03/06/understanding-rosbridge-and-roslib-js<h2 id="ros">ROS</h2>
<p>The Robot Operating System (ROS) was originally developed by Willow Garage …</p>
<h2 id="why-rosbridge-">Why Rosbridge ?</h2>
<p>ROS is already a versatile piece of software, offering a native C++ library as well as extensive Python bindings. However, other scripting/programming languages - such as JavaScript or Java - cannot directly benefit from the implementation of the ROS communication protocol. Rosbridge was developed to fill that gap.</p>
<h4 id="pros">Pros</h4>
<ul>
<li>Uses generic network protocols (TCP, Websockets)</li>
<li>Connects web interfaces to the ROS network.</li>
<li>Offers throttling and queueing optimizations.</li>
</ul>
<h4 id="cons">Cons</h4>
<ul>
<li>Breaks ROS’ decentralized network architecture.</li>
</ul>
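Under the hood, rosbridge exchanges plain JSON messages over the websocket (the rosbridge v2.0 protocol), and roslibjs builds those messages for you. As a self-contained illustration - the topic name and payload below are example values, not part of the protocol - the raw wire messages look like this:

```javascript
// Sketch of the raw JSON messages in the rosbridge v2.0 protocol.
// roslibjs constructs and sends these over the websocket for you;
// '/chatter' and the payload are just example values.

// Ask rosbridge to subscribe us to a ROS topic
const subscribeMsg = {
  op: 'subscribe',
  topic: '/chatter',
  type: 'std_msgs/String'
};

// Publish a message on a ROS topic from the browser
const publishMsg = {
  op: 'publish',
  topic: '/chatter',
  msg: { data: 'hello from the browser' }
};

// What actually travels over the wire is the JSON serialization:
const wire = JSON.stringify(subscribeMsg);
console.log(wire);
```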
<h2 id="roslib-js">Roslib JS</h2>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">var</span> <span class="nx">ros</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">ROSLIB</span><span class="p">.</span><span class="nx">Ros</span><span class="p">({</span>
<span class="na">url</span><span class="p">:</span> <span class="s1">'ws://localhost:9090'</span>
<span class="p">})</span>
</code></pre></div></div>Sebastien MamessierProjects2016-03-06T04:47:00-05:002016-03-06T04:47:00-05:00https://seb.mamessier.com/2016/03/06/projects<h1 id="academic-projects">Academic projects</h1>
<h2 id="2016">2016</h2>
<p><a href="/proj-2016-ns">Network security and domain blacklists</a></p>Sebastien MamessierHack4Europe - VRide2015-05-26T15:56:00-05:002015-05-26T15:56:00-05:00https://seb.mamessier.com/2015/05/26/hack4europe-vride<p>Hacked over three weekends together with Gregoire Deprez, VRide aimed to enhance Uber/Lyft rides with augmented-reality experiences. More of a proof of concept, it got us familiar with Vuforia, AR markers, OpenGL, WebGL, CSS3D and the Uber APIs.</p>
<p><img src="https://seb.mamessier.com/assets/images/2016/05/vride-1.png" alt="" /></p>
<p>The video is slightly laggy (the available hardware was slightly insufficient - we should try again with newer smartphones!), but it gives a good impression of the concept!</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/M1yG-ApbirE" frameborder="0" allowfullscreen=""></iframe>Sebastien MamessierContinuous Kalman Filter Scheduling for Situation Awareness in the Cockpit2015-05-26T05:32:00-05:002015-05-26T05:32:00-05:00https://seb.mamessier.com/general/2015/05/26/untitled-continuous-kalman-filter-scheduling-for-situation-awareness-in-the-cockpit<p>Ongoing research in Cognitive Engineering proposes to model an ideal pilot as an optimal state estimator. Control theory has tackled the problem of optimally scheduling the allocation of sensors to track multiple correlated targets, using results from operations research. Combining the findings of both disciplines could help provide a quantitative indicator for the best-case performance of the flight crew as a result of the interaction of the aircraft/auto-flight system dynamics, physiological constraints, cockpit interfaces and pilot monitoring patterns. This project investigates the addition of realistic human-related constraints - derived from experimental pilot studies - and geometrical constraints to the cost function used in the Kalman filter scheduling problem.</p>
<iframe src="https://drive.google.com/file/d/0B4oD9uzoUfEGakVIY3FqbktpODA/preview" width="100%" height="480"></iframe>Sebastien MamessierCognitive engineering analysis of an automated car2014-05-25T23:02:00-05:002014-05-25T23:02:00-05:00https://seb.mamessier.com/2014/05/25/work-analysis-automated-car<p>Here I summarize the final project I did together with Gabriel Gelman for the graduate class <code class="highlighter-rouge">Cognitive Engineering</code> taught by Dr. Karen Feigh, Professor at the Georgia Tech School of Aerospace. The idea was to apply several concepts we learned about work domain analysis, levels of automation and function allocation to the rising problem of self-driving cars. Keep in mind that the final report was written in 2011, when Google’s and other companies’ efforts to create autonomous vehicles were still at an embryonic stage. We emphasize the need for a formal analysis of the allocation of tasks between the driver and the <code class="highlighter-rouge">autopilot</code> in the transition phase of the upcoming years: neither the infrastructure nor the technology allows fully automated vehicles to safely operate on every type of road.
Although this project doesn’t attempt to tackle the technical challenges inherent to such automation, it’s still a good read, as I believe cognitive aspects should have a more central position in the current debate about autonomous cars.</p>
<iframe src="/ViewerJS/#/docs/gatech/6551/report.pdf" width="100%" height="800" allowfullscreen="" webkitallowfullscreen=""></iframe>Sebastien Mamessier