MEAN Stack Demo App (UPDATE)

Last year, I created a simple MEAN Stack demo app, published the code on GitHub, and deployed it here. I haven’t had much time to keep working on it since then, but this week I did manage to make some improvements.

The biggest change I made was converting the entire application into a single page application, eliminating the need for server-side views rendered by Node and instead relying entirely on Angular, even for authentication. The approach I settled on for authentication is JWT (JSON Web Tokens), which let me secure API requests without needing sessions or cookies. In fact, this change allowed me to remove session and cookie support from my server-side code entirely. In a small app like this, it’s not a big deal, but for a larger app, not needing server-side sessions can improve scalability and simplify deployment in multi-server environments.
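
To give a flavor of how this works, here is a minimal sketch of JWT verification as Express middleware, assuming the jsonwebtoken npm package (the secret, route, and names here are illustrative, not the demo app’s actual code):

var jwt = require('jsonwebtoken');
var JWT_SECRET = process.env.JWT_SECRET; // illustrative; keep the real secret out of source control

// Verifies the Authorization header on each API request.
// No session or cookie is required; the token itself carries the identity.
function requireAuth(req, res, next) {
    var header = req.headers.authorization || '';
    var token = header.replace(/^Bearer /, '');
    jwt.verify(token, JWT_SECRET, function (err, decoded) {
        if (err) {
            return res.status(401).send('Invalid or missing token');
        }
        req.user = decoded;
        next();
    });
}

// Usage: app.get('/api/things', requireAuth, handler);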

As part of the conversion to a true single page application, I also decided to use UI Router. UI Router is better than Angular’s ngRoute in many ways, but the primary driving factor is its ability to nest views, which simplifies the page layout by letting the header and footer of my page be broken out into separate views.
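
Here is a hedged sketch of the nested-view idea (the module, state, and template names are illustrative, not the app’s actual ones):

var app = angular.module('demoApp', ['ui.router']);

app.config(function ($stateProvider, $urlRouterProvider) {
    $urlRouterProvider.otherwise('/home');

    // An abstract root state fills the named header and footer views
    // in index.html, leaving an unnamed view for each page's content.
    $stateProvider.state('app', {
        abstract: true,
        views: {
            'header': { templateUrl: 'partials/header.html' },
            'footer': { templateUrl: 'partials/footer.html' },
            '': { template: '<div ui-view></div>' }
        }
    });

    // Child states render inside the unnamed content view.
    $stateProvider.state('app.home', {
        url: '/home',
        templateUrl: 'partials/home.html'
    });
});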

Finally, I created a build process using Gulp. Gulp is something I was not very familiar with when I originally created the MEAN Stack demo app last year, but since then I’ve learned just how powerful it can be and how easy it makes things. Coming from a vantage point of having had some exposure to MSBuild in .NET, Gulp to me is just so much more powerful and intuitive than MSBuild, and I actually enjoy using it. The best part is it’s just code.
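
As a taste of what that looks like, here is a minimal gulpfile sketch (the task names, paths, and plugins are illustrative; it assumes the gulp-concat and gulp-uglify plugins are installed):

var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

// Bundle and minify the application's JavaScript into dist/app.min.js.
gulp.task('scripts', function () {
    return gulp.src('app/**/*.js')
        .pipe(concat('app.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('dist'));
});

// Rebuild the bundle whenever a source file changes.
gulp.task('watch', ['scripts'], function () {
    gulp.watch('app/**/*.js', ['scripts']);
});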

I hope to continue updating this app over time, and hopefully it won’t take as long this time, but we’ll see. I tentatively plan to make some UI improvements (moving away from the default Bootstrap appearance), use Sass to generate the CSS, and add a dashboard page with some charts and a few ways to visualize data.

Quick Tip: Automatically collapsing Bootstrap navbar after navigation in a single page application

While working on an Angular web application that uses the Bootstrap collapsible navbar, I ran into an issue where clicking one of the items in the expanded navbar did not automatically collapse it. The issue stems from the fact that Bootstrap was not developed with single page applications in mind, where navigation does not involve a full page request to the server.

As I often do when I encounter an issue like this, I checked StackOverflow and quickly found a question about this very problem: Hide Twitter Bootstrap nav collapse on click.

In this case, none of the proposed answers completely solved the problem. However, I was able to piece together my own solution from bits and pieces of the answers given. Executing this JavaScript on document ready ensures that, in a single page application, the Bootstrap navbar collapses properly after you click a hyperlink contained within it:

// When a link inside the collapsible navbar is clicked (ignoring
// dropdown toggles, which need to stay open), collapse the navbar.
$('.navbar-collapse a:not(.dropdown-toggle)').click(function(){
    $(this).parents('.navbar-collapse').collapse('hide');
});
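
If you’d rather avoid a global document-ready handler in an Angular app, the same behavior can be packaged as a directive placed on the .navbar-collapse element. This is just a sketch, under the assumption that jQuery is loaded before Angular (which it normally is when using Bootstrap’s JavaScript); the module name is illustrative:

angular.module('demoApp').directive('collapseOnNavigate', function () {
    return {
        restrict: 'A',
        link: function (scope, element) {
            // Delegated handler: clicking any non-dropdown link
            // inside the navbar collapses it.
            element.on('click', 'a:not(.dropdown-toggle)', function () {
                element.collapse('hide');
            });
        }
    };
});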

SEO Checklist

As I mentioned in a previous post about knowledge panels, I was doing a bit of SEO work recently and found a lot of information and advice while researching how to improve search engine rank. Because Google doesn’t reveal much about the criteria it uses to rank search results, and because it changes its ranking algorithm over time, it is difficult to know for certain which steps to take.

The best thing you can do for SEO is to produce high quality content targeted at answering specific questions that Google’s users might have. Google is trying to guess which site has the most appropriate answer for a question (a.k.a. search term), and you want to craft your site to make it easy for Google to determine that your site has the best, most succinct answer.

I’m not going to try to provide an all-encompassing list of SEO advice. After all, entire books are written on that subject every year. But I am going to provide a checklist of some non-content related activities you can do that may help improve your search engine ranking:

Use descriptive page titles

Make sure your page titles accurately describe the content of your page. The page title is used as the link text in Google search results, so it really stands out and might be the only thing a user actually reads while scanning down a page of search results.

Use descriptive URLs

Having a descriptive URL is important because it also shows in the search results and can potentially be a factor in Google’s algorithm for determining what your page is about. Try to make your URL human readable; a few words separated by dashes (for example, /blog/mean-stack-demo-app rather than /post?id=123) is best.

Use meta tags

Meta “description” and “keywords” tags used to be critical to search engine ranking, but today they are less so. I don’t know how much of a role they play in Google’s algorithm, but the meta description tag does give you a way to specify the content shown in the two-line excerpt below your listing in search results. Without a meta description tag, Google will try its best to come up with a suitable excerpt, but that is not always the most appropriate text for communicating what your page is about. So, at a minimum, make sure you have a short (30-50 word) meta description tag. While you’re at it, add a meta keywords tag containing a comma-separated list of keywords that pertain to your page; however, make sure the keywords truly pertain to your content.
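
For example, in the <head> of your page (the content values here are only illustrative):

<meta name="description" content="A checklist of non-content SEO tasks that may help improve your search engine ranking.">
<meta name="keywords" content="seo, search engine ranking, checklist">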

Make sure your site is mobile friendly

Google prefers pages that are mobile friendly. It also displays a “mobile-friendly” label next to search results when you search from a mobile device. For this reason, and to make sure your mobile visitors have a good experience, you should make sure Google considers your site mobile friendly. Google provides a Mobile-Friendly Test tool you can use to verify whether it considers your site mobile friendly or not. Bing also has a Mobile Friendliness Test Tool, which is worth using as well.

Create a sitemap.xml

A sitemap is an XML file that describes the structure of your website and is used by Google to crawl your site. All of your pages should be reachable through your site’s navigation anyway, but having a sitemap helps make sure none of your pages get missed during the crawl.
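
A minimal sitemap.xml looks something like this (the URLs and dates are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2016-01-15</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/about</loc>
    <lastmod>2016-01-10</lastmod>
  </url>
</urlset>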

Sign up for Google Search Console

Sign up for Google Search Console. Add and verify your site and submit your sitemap. Verification involves either adding a custom DNS TXT record or adding a meta tag to your site.
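
If you choose the meta tag route, Search Console gives you a token to place in your page’s <head>, something like this (the content value is a placeholder for your own token):

<meta name="google-site-verification" content="YOUR-VERIFICATION-TOKEN">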

Sign up for Bing Webmaster Tools

Sign up for Bing Webmaster Tools. Add and verify your site and submit your sitemap, just as you did with Google Search Console. Bing Webmaster Tools also has a feature called “Connected Pages” where you can tell Bing which social media accounts are associated with your website, so if you have any social media accounts related to your site, you might as well link them here. While you’re in Bing Webmaster Tools, also try the SEO Analyzer tool, which can identify problems with your site that may be holding back its SEO.

Analyze your page speed

Web users don’t like to wait, so Google prefers to show them pages that load quickly. Use Google’s PageSpeed Insights tool to get feedback about the performance of your site. Keep in mind, this is how Google perceives the performance of your site, which may differ from the performance you perceive. I’ve found this tool to be pretty harsh in its ratings, and you may not be able to fix every problem it identifies, but it does provide good feedback and ideas about how you might make your site load faster.

Sign up for Google Analytics

Google Analytics is a free tool you can use to further analyze the traffic that arrives at your site. While it may not do anything to directly improve your site’s position in search results, the information it provides can help you understand how well your content is performing in search results and give you ideas on how to improve your content.

Check whether your site is included in Google’s and Bing’s indexes

A good way to quickly check what Google has in its index for your site is to use the special “site:” prefix when searching. For example, if you go to Google and search for “site:blog.cinlogic.com”, it will show you a list of all of the indexed entries it has for blog.cinlogic.com. This is a simple way to confirm that Google has indeed indexed your site’s pages, and a good way to check how your site’s title, URL, and description appear in the search results. If you don’t see pages you expect in this list, it is a red flag that an error or some other problem may be preventing Google from crawling your site.

Submit URLs directly to Google and Bing

If your site is new, or if you don’t want to wait for Google to re-crawl your site after you’ve made changes, you can prompt Google to crawl a URL by submitting it directly to Google. Bing has a similar way to submit your site.

Use HTTPS

Google has publicly said that using HTTPS can improve your search engine ranking. Just how much this is a factor is unclear, but given the low cost of HTTPS, it makes sense to use HTTPS for your site if you can.

Understanding the Google Knowledge Panel for Brands

Recently, I was helping a company with some SEO, and as part of that effort, I was trying to get a knowledge panel to appear when you search for their brand on Google. While having a knowledge panel isn’t as important as producing high quality content to improve your position in search results, it does reinforce to prospective customers that your brand is indeed legitimate. Besides, since Google’s ranking algorithm is not publicly known and changes over time, I wouldn’t be completely surprised if having a knowledge panel is a positive factor in the ranking algorithm, or becomes one at some point in the future.

So let me back up a second and explain what I mean by knowledge panel, because if you haven’t done SEO before, or don’t pay close attention to the results that come back from a Google search, you may not know what that is. A knowledge panel is a specialized area of the search results that displays a summary and key facts about the specific entity you searched for. Here is the knowledge panel that appears when I search for Google using Google.

GoogleKnowledgePanelHighlighted

The first important thing to understand is that there are multiple types of knowledge panels. I don’t know all of the potential types, but here are three main ones:

  • Brand – The one shown above for Google is a brand knowledge panel.
  • Person – This is very similar to a brand knowledge panel, but for a person, and appears a little differently.
  • Local Business – This is for a local store or restaurant and displays a map, reviews, hours of operation, etc.

What I’m going to be talking about here are brand knowledge panels, and it turns out they are much more difficult to obtain than some SEO blogs make them out to be, especially if you are a relatively new startup company trying to build your presence. I believe everything here also applies to person knowledge panels. Local business knowledge panels, however, appear to follow entirely different rules for whether they are displayed (and the good news is that, if you want one for your local business, it seems to be much easier to obtain than a brand knowledge panel).

Structured Data Markup

According to Google, using structured data markup on your public website can influence its decision about which social media profiles to display in your brand’s knowledge panel. The wording on Google’s site seems at first to imply that having this markup tells Google what it needs to know to create a knowledge panel, but that does not appear to be the case: it does not seem to have any bearing on whether Google chooses to display a knowledge panel in the first place. If it is a factor, it alone is not enough to prompt Google to display one.

My conclusion is that having structured data markup on your public website can’t hurt, and might help fill out an incomplete knowledge panel, but don’t expect it to help you very much.
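
For what it’s worth, the markup Google describes for specifying social profiles is JSON-LD along these lines (the company name and URLs here are illustrative):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "http://www.example.com",
  "sameAs": [
    "https://www.facebook.com/examplecompany",
    "https://twitter.com/examplecompany",
    "https://www.linkedin.com/company/examplecompany"
  ]
}
</script>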

WikiData

Some of the advice I read about getting a brand knowledge panel said it was just a matter of creating an entry for your company in Freebase. However, Freebase was shut down in 2015 and is now a read-only repository kept for historical purposes. The existing data (some or all, I’m not sure) has been migrated to Wikidata, which, like Freebase, is a publicly-editable repository of structured data. But Wikidata is much more restrictive about which entries it accepts, limiting itself to people and companies that meet its “notability” requirement. Without being considered notable, expect Wikidata editors to remove your entry within a matter of days.

So what criteria does Wikidata use to determine notability? It publishes notability guidelines which essentially say that you must have at least one valid sitelink to Wikipedia or another similar site that has editorial guidelines. Essentially, Wikidata defers the job of determining notability to Wikipedia’s editors.

My conclusion for Wikidata is that it is impossible to get a Wikidata entry unless you first have a Wikipedia entry. If you have a valid Wikipedia entry and are considered notable by Wikipedia editors, then having a Wikidata entry can’t hurt and might even help fill in some missing data in the knowledge panel.

Wikipedia

So this brings me to Wikipedia. From looking at numerous knowledge panels for companies of various sizes, my determination is that Wikipedia is just about the only thing that matters in determining whether you can get a knowledge panel. While I can’t say whether having a Wikipedia entry is a sure-fire way to trigger a knowledge panel for your brand, I can say that I have NEVER seen a brand knowledge panel for something that did not also have a Wikipedia entry.

The rules around what makes an acceptable Wikipedia page are beyond what I’m going to cover here, and besides, I have never authored a brand new Wikipedia page, so I will leave that to someone else. I do know that your entry needs to be written in a factual manner and backed up by references to reputable news sources. With that in mind, if you are a new startup company, your biggest barrier to creating a Wikipedia page for your business is getting enough reputable third-party sources to write about you. My suggestion is to think about what your Wikipedia page might look like by checking out similar companies that already have one, then look at your existing reference sources, identify the gaps, and focus on getting coverage in those areas so that you will have enough sources to eventually build your Wikipedia page. Don’t expect this to happen overnight, but if you start early with this strategy, then by the time you want a Wikipedia page, you will have everything you need.

My conclusion for Wikipedia is that it is critical to have a Wikipedia entry for your company if you want a brand knowledge panel to appear for you in the search results. If you do only one thing to try to get a knowledge panel, do this.

Summary

1. Create a Wikipedia page for your company.

That is probably all you really need to do, but if you want to go further, it can’t hurt to:

2. Create a Wikidata entry (must reference your Wikipedia entry) and fill in details about your business.
3. Add structured data markup to your public website.

Visualize Data Using Google Charts and ASP.NET MVC

Visualizing data using charts is simpler than ever thanks to libraries like Google Charts. I’m going to show how to construct a Google Chart from a collection of .NET objects in C#. Essentially, all you need to do is take the collection of .NET objects, translate it into a class that matches the format Google expects for chart data, and then Json.Encode that object in your view as part of a script that generates the chart.

The example takes this data…

new MarketSales[]
{
    new MarketSales() { Market = "Cincinnati", Year = 2013, TotalSales = 723898 },
    new MarketSales() { Market = "Cincinnati", Year = 2014, TotalSales = 898132 },
    new MarketSales() { Market = "Cincinnati", Year = 2015, TotalSales = 941823 },
    
    new MarketSales() { Market = "Columbus", Year = 2013, TotalSales = 509132 },
    new MarketSales() { Market = "Columbus", Year = 2014, TotalSales = 570913 },
    new MarketSales() { Market = "Columbus", Year = 2015, TotalSales = 460923 },
    
    new MarketSales() { Market = "Cleveland", Year = 2013, TotalSales = 753939 },
    new MarketSales() { Market = "Cleveland", Year = 2014, TotalSales = 830923 },
    new MarketSales() { Market = "Cleveland", Year = 2015, TotalSales = 910302 },
    
    new MarketSales() { Market = "Dayton", Year = 2013, TotalSales = 109012 },
    new MarketSales() { Market = "Dayton", Year = 2014, TotalSales = 400302 },
    new MarketSales() { Market = "Dayton", Year = 2015, TotalSales = 492901 }
}

… and uses it to create this chart:

Google Chart

I’ve published the source code for this example on GitHub. If you are creating this from scratch, the best approach is to create a new ASP.NET Web Application project using the Empty template and check the box to add folders and core references for MVC. This brings in the necessary dependencies without adding a bunch of starter code that isn’t needed for the example, giving you a relatively clean starting point.

Here is the code for the example:

Controllers\HomeController.cs
The controller retrieves the data (imagine it coming from a database; for this example it just constructs an array of MarketSales objects) and then translates it into an object of type GoogleVisualizationDataTable, which exists to match the format Google requires for creating a DataTable for the chart to use.

using GoogleChartsExample.Models;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

namespace GoogleChartsExample.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Get the chart data from the database.  At this point it is just an array of MarketSales objects.
            var data = GetMarketSalesFromDatabase();

            return View(new SalesChartModel()
            {
                Title = "Total Sales By Market and Year",
                Subtitle = "Cincinnati, Cleveland, Columbus, and Dayton",
                DataTable = ConstructDataTable(data)
            });
        }

        private GoogleVisualizationDataTable ConstructDataTable(MarketSales[] data)
        {
            var dataTable = new GoogleVisualizationDataTable();

            // Get distinct markets from the data
            var markets = data.Select(x => x.Market).Distinct().OrderBy(x => x);

            // Get distinct years from the data
            var years = data.Select(x => x.Year).Distinct().OrderBy(x => x);

            // Specify the columns for the DataTable.
            // In this example, it is Market and then a column for each year.
            dataTable.AddColumn("Market", "string");
            foreach (var year in years)
            {
                dataTable.AddColumn(year.ToString(), "number");
            }

            // Specify the rows for the DataTable.
            // Each Market will be its own row, containing the total sales for each year.
            foreach (var market in markets)
            {
                var values = new List<object>(new[] { market });
                foreach (var year in years)
                {
                    var totalSales = data
                        .Where(x => x.Market == market && x.Year == year)
                        .Select(x => x.TotalSales)
                        .SingleOrDefault();
                    values.Add(totalSales);
                }
                dataTable.AddRow(values);
            }

            return dataTable;
        }

        private MarketSales[] GetMarketSalesFromDatabase()
        {
            // Let's pretend this came from a database, via EF, Dapper, or something like that...
            return new MarketSales[]
            {
                new MarketSales() { Market = "Cincinnati", Year = 2013, TotalSales = 723898 },
                new MarketSales() { Market = "Cincinnati", Year = 2014, TotalSales = 898132 },
                new MarketSales() { Market = "Cincinnati", Year = 2015, TotalSales = 941823 },

                new MarketSales() { Market = "Columbus", Year = 2013, TotalSales = 509132 },
                new MarketSales() { Market = "Columbus", Year = 2014, TotalSales = 570913 },
                new MarketSales() { Market = "Columbus", Year = 2015, TotalSales = 460923 },

                new MarketSales() { Market = "Cleveland", Year = 2013, TotalSales = 753939 },
                new MarketSales() { Market = "Cleveland", Year = 2014, TotalSales = 830923 },
                new MarketSales() { Market = "Cleveland", Year = 2015, TotalSales = 910302 },

                new MarketSales() { Market = "Dayton", Year = 2013, TotalSales = 109012 },
                new MarketSales() { Market = "Dayton", Year = 2014, TotalSales = 400302 },
                new MarketSales() { Market = "Dayton", Year = 2015, TotalSales = 492901 }
            };
        }
        
    }
}

Models\GoogleVisualizationDataTable.cs
This is a model class used to get data into the format needed to create a DataTable. It has a few helper methods to simplify adding columns and rows.

using System.Collections.Generic;
using System.Linq;

namespace GoogleChartsExample.Models
{
    // This class is used to facilitate JSON serialization into the format required by Google to create a DataTable.
    // See https://developers.google.com/chart/interactive/docs/reference#DataTable
    public class GoogleVisualizationDataTable
    {
        public IList<Col> cols { get; } = new List<Col>();
        public IList<Row> rows { get; } = new List<Row>();

        public void AddColumn(string label, string type)
        {
            cols.Add(new Col() { label = label, type = type });
        }

        public void AddRow(IList<object> values)
        {
            rows.Add(new Row() { c = values.Select(x => new Row.RowValue() { v = x }) });
        }

        public class Col
        {
            public string label { get; set; }
            public string type { get; set; }
        }

        public class Row
        {
            public IEnumerable<RowValue> c { get; set; }
            public class RowValue
            {
                public object v;
            }
        }
    }
}

Models\MarketSales.cs
This is a model class used to represent the source data, as if it came from Entity Framework or another ORM.

namespace GoogleChartsExample.Models
{
    public class MarketSales
    {
        public string Market { get; set; }
        public int Year { get; set; }
        public decimal TotalSales { get; set; }
    }
}

Models\SalesChartModel.cs
This is the model we bind to the view, containing the chart’s title and subtitle and the DataTable used to construct the chart.

namespace GoogleChartsExample.Models
{
    public class SalesChartModel
    {
        public string Title { get; set; }
        public string Subtitle { get; set; }
        public GoogleVisualizationDataTable DataTable { get; set; }
    }
}

Views\Home\Index.cshtml
This view is what actually builds the chart, and the code is mostly similar to Google’s example code. The main difference is that we use binding to move our server-side data into client-side JavaScript so that it can be used by Google Charts.

@model GoogleChartsExample.Models.SalesChartModel
<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Index</title>
    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>
</head>
<body>
    <div> 
        <div id="chart" style="width: 500px; height: 300px;"></div>
        <script type="text/javascript">
            google.charts.load('current', { 'packages': ['corechart', 'bar'] });

            google.charts.setOnLoadCallback(function () {
                var title = '@Model.Title';
                var subtitle = '@Model.Subtitle';
                var dataTable = new google.visualization.DataTable(
                    @Html.Raw(Json.Encode(Model.DataTable))
                );

                drawBarChart('chart', title, subtitle, dataTable);                
                //drawColumnChart('chart', title, dataTable);
            });

            function drawBarChart(elementId, title, subtitle, dataTable) {
                var options = {
                    chart: {
                        title: title,
                        subtitle: subtitle
                    }
                };
                var chart = new google.charts.Bar(document.getElementById(elementId));
                chart.draw(dataTable, options);
            }

            function drawColumnChart(elementId, title, dataTable) {
                var options = {
                    title: title
                };
                var chart = new google.visualization.ColumnChart(document.getElementById(elementId));
                chart.draw(dataTable, options);
            }
        </script>
    </div>
</body>
</html>

I hope this example gives you some ideas about how you can use Google Charts for your ASP.NET MVC applications.

Starting and Stopping AWS EC2 Instances Using the AWS Command Line Interface

You can start and stop AWS (Amazon Web Services) EC2 instances using the command line interface instead of the AWS Console website. This is helpful if you want to start and stop instances programmatically, in cases where you do not want to leave an instance running constantly but want to bring it up and shut it down on demand or on a schedule. In this blog post, I’m going to show the basics of how to do this.


Creating a User with an Access Key

In order to use the AWS command line interface, you will first need to create a user account with an access key. This is different from the account you use to log into the AWS console; this account is used to execute AWS CLI commands and can be granted fine-grained permissions. To create a new user:

Login to the AWS IAM Console (Identity and Access Management Console)

Click on Users and then click on the Create New Users button.  Enter the name of the user to create (testuser in this example), make sure Generate an access key for each user is checked, and then click the Create button.

CreateUsers

Next, you will see a confirmation that the user was created. Click to expand Show User Security Credentials. Make sure you copy the Access Key ID and Secret Access Key. This is the only time you will be able to see the secret access key; if you don’t copy it, or if you lose it, you will have to regenerate the access key!

UserCreated2


Assigning Permissions to the Newly Created User

When a new user is created, it has no permissions, so you will need to assign some. Permissions are assigned by attaching policies to the user, or by adding the user to a group that has policies attached to it. In this example, because we only have a single user, I am going to attach a policy directly to the user that was created.

To attach a policy to the user in the IAM Console, select Users again, click on the user we just added (testuser) and then switch to the Permissions tab.

UserPermissions

Now click on the Attach Policy button, select one or more policies and click the Attach Policy button.  In this example, we’re just going to attach the AmazonEC2FullAccess policy to this user.

AttachPolicy

Now we have a user that can be used to make command line calls.


Installing the AWS Command Line Interface

Next, we need to install the AWS Command Line Interface. I’m using Ubuntu in this example, and the instructions for installing the CLI are located here. Amazon also provides instructions for installing it on other platforms, such as Windows. Basically, you need Python and unzip installed, and then you execute the following commands to download and install the AWS CLI:

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws


Using the AWS Command Line Interface

In order to use the AWS Command Line Interface, we need to specify our credentials (the access key and secret key we generated earlier). There are two ways to do this: by running the aws configure command, which creates a ~/.aws/credentials file, or by setting environment variables. There is actually a third way to specify credentials, using IAM roles, but that is specific to using the CLI from another EC2 instance.
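
For reference, the ~/.aws/credentials file that aws configure creates looks something like this (with your actual keys in place of the Xs):

[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX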

In this example, I’m just going to set the credentials using environment variables. I’m also going to set a default region so that I don’t need to specify the region on each command. Replace the XXXXXXs with your generated Access Key ID and Secret Access Key.

$ export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
$ export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
$ export AWS_DEFAULT_REGION=us-east-1

Now you can start using the CLI.

Here is a command to check the status of an EC2 instance (running, stopped, stopping, etc.). I’m using grep and awk to pull out just the text that says “stopped” or “running”. Make sure to put your own instance ID in this command in place of i-XXXXXXXX:

$ aws ec2 describe-instances --instance-ids i-XXXXXXXX --output text | grep -w STATE | awk '{print $3}'

Here are the commands to start and stop an instance.

To start an instance:

$ aws ec2 start-instances --instance-ids i-XXXXXXXX --output text | grep -w CURRENTSTATE | awk '{print $3}'

To stop an instance:

$ aws ec2 stop-instances --instance-ids i-XXXXXXXX --output text | grep -w CURRENTSTATE | awk '{print $3}'

Keep in mind that the commands return immediately, so if you start an instance, the initial state returned will be “pending” until it has finished starting and then will change to “running”. Similarly, when you stop an instance, the initial state returned will be “stopping” and will change to “stopped” once it has finished stopping.
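
If you want a script to block until the transition finishes, the CLI also provides wait commands, which poll until the instance reaches the desired state, so you don’t have to loop on describe-instances yourself:

$ aws ec2 wait instance-running --instance-ids i-XXXXXXXX
$ aws ec2 wait instance-stopped --instance-ids i-XXXXXXXX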

I hope this post has been a helpful introduction to using the AWS Command Line Interface to do some simple management of EC2 instances.

Running QCL (Quantum Computation Language) on Ubuntu

Yesterday, I posted about how to run QCL (Quantum Computation Language) on Windows. Today, I’ll show how to install and run it on Ubuntu (I’m using Ubuntu 15.04 running on an Azure VM). QCL does have a binary distribution, but I will be compiling it from source.

Since my approach to running QCL on Windows was to use Cygwin, the process is similar, with a few differences.

From an Ubuntu command prompt, download and extract the QCL source code.

$ wget http://tph.tuwien.ac.at/~oemer/tgz/qcl-0.6.4.tgz
$ tar xvzf qcl-0.6.4.tgz

Next, I discovered that I needed to install libplot-dev; otherwise I got an error during compilation saying “fatal error: plotter.h: No such file or directory”. I also found that I needed to install flex, because I got a compile error saying “/usr/bin/ld: cannot find -lfl”, and libncurses-dev, because I got an error saying “/usr/bin/ld: cannot find -lncurses”. QCL also uses readline, so libreadline-dev is installed below as well.

You can install these dependencies using apt-get, like this:

$ sudo apt-get update
$ sudo apt-get install flex libncurses-dev libplot-dev libreadline-dev

Now change to the qcl folder and compile it:

$ cd qcl-0.6.4
$ make

Now you should be able to run qcl…

$ ./qcl

…and then get the QCL command prompt, which will look something like this:

QCL Quantum Computation Language (64 qubits, seed 1450711245)
[0/64] 1 |0>
qcl>

Running QCL (Quantum Computation Language) on Windows

According to a 2014 PC World article, a D-Wave quantum computer costs over $10 million, but you can learn to program a quantum computer without actually having one by using QCL (Quantum Computation Language). QCL is one of the first quantum computer programming languages; it simulates a quantum computer, allowing you to implement and test quantum algorithms without the need for the real hardware.

QCL can run on various operating systems, but I’m going to show how to run it on Windows (I am using Windows 10) using Cygwin.

The first thing you need to do is download and run the Cygwin installer from https://cygwin.com.  During the installation, select the following packages:

  • Under Devel…
    • bison
    • flex
    • gcc-g++
    • libreadline-devel
    • make
  • Under Graphics…
    • libplotter-devel
  • Under Libs…
    • libncurses-devel
    • libX11-devel
    • libXt-devel
  • Under Web…
    • wget

Once Cygwin is installed, run it from where you installed it (for example, C:\cygwin64\cygwin.bat).

At the Cygwin command prompt, execute the following commands (change the QCL version number if you are using a different version):

To download and extract the QCL source code:

$ wget http://tph.tuwien.ac.at/~oemer/tgz/qcl-0.6.4.tgz
$ tar xvzf qcl-0.6.4.tgz

To compile the QCL source code:

$ cd qcl-0.6.4
$ make

To run the QCL interpreter/simulator:

$ ./qcl

When QCL runs, you should see something like this and get the QCL prompt:

QCL Quantum Computation Language (64 qubits, seed 1450665178)
[0/64] 1 |0>
qcl>

That’s it! I’m not going to cover how to program in QCL at this time (I’m still learning) but I hope this helps you get QCL up and running on your Windows system.

Simple web scraping using Python

I just posted a GitHub gist showing how to use Python to scrape a web page and extract a product price.

As you can see, web scraping is pretty simple, but there are some challenges you need to keep in mind:

  • First, the page you are scraping may change without notice, causing your code to fail when it can no longer find the information it was attempting to find.
  • Second, if you request too many pages from a site or request them too frequently, the site you are scraping may start blocking or denying your requests.
  • Finally, depending on the site you are scraping, automated scraping may violate the terms of service.

Here is the Python code to scrape an example product web page:

from bs4 import BeautifulSoup
from urllib2 import Request, urlopen  # Python 2; in Python 3 use urllib.request
import decimal

def findPrice(url, selector):
    # Send a browser-like User-Agent, since some sites reject requests from unknown clients.
    userAgent = "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36"
    req = Request(url, None, {'User-Agent': userAgent})
    html = urlopen(req).read()
    # Parse the page and extract the price text from the element matching the CSS selector.
    soup = BeautifulSoup(html, "lxml")
    return decimal.Decimal(soup.select(selector)[0].contents[0].strip().strip("$"))

print findPrice("https://cdn.rawgit.com/brianpursley/661071c026b9bf130971/raw/94a914d15e977150b531c5c44cbee1545f9e70f0/example-scrape-target.html", "#priceRow > div:nth-of-type(2)")

This code requires Python 2 and Beautiful Soup (along with the lxml parser the script specifies) in order to run.

On Ubuntu you can use the following commands to install the dependencies, get the python script from the gist, and run it:

$ sudo apt-get install -y wget python python-bs4 python-lxml

$ wget https://gist.githubusercontent.com/brianpursley/c0c56b03f8e0095f77db/raw/f5133f2569da75f070f5a871118d0c1e76dffce0/scrape.py

$ python scrape.py

Troubleshooting Cisco AnyConnect VPN Client connection problem after upgrading to Windows 10

I’ve been putting off upgrading from Windows 7 (I never made the jump to Windows 8) to Windows 10, because I’ve been working on some important projects and didn’t want to have to troubleshoot anything while trying to get my normal work done. But today, I decided to bite the bullet and upgrade to Windows 10.

One problem I had was that the Cisco AnyConnect VPN Client (version 2.5.2014) that I use to connect remotely to one of my customers’ networks was suddenly unable to connect after the upgrade.

When trying to connect, I got an error saying only:  AnyConnect was not able to establish a connection to the specified secure gateway.  Please try connecting again.

Image

In Windows Event Viewer, I also saw several critical errors for the VPN client, the most descriptive of which was:

Function: CVAMgr::~CVAMgr
File: .\VAMgr.cpp
Line: 151
Invoked Function: CVAMgr::disable
Return Code: -32964594 (0xFE09000E)
Description: VAMGR_ERROR_CVIRTUALADAPTER_FAILED

Googling the error message and the description from Event Viewer, I came across these two discussion threads that seem to describe the same problem I was having (only for Windows 8):

http://www.eightforums.com/network-sharing/4001-cisco-anyconnect.html

https://social.msdn.microsoft.com/Forums/en-US/6fe817f3-27fe-4068-995a-aced4508ee3e/windows-8-and-cisco-vpn?forum=windowsdeveloperpreviewgeneral

Hoping that the information would also apply to Windows 10, I decided to follow the suggestions.

Using Regedit, and after some investigation, I found that a registry value needed by the Cisco VPN client had become “messed up” somehow (I assume during the Windows 10 upgrade, since the VPN client worked fine the day before).

The value affected was HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vpnva\DisplayName

It had a value of:

@oem5.inf,%vpnva_Desc%;Cisco AnyConnect VPN Virtual Miniport Adapter for Windows x64

But it should be just this instead (removing the preceding @oem5.inf,%vpnva_Desc%;)…

Cisco AnyConnect VPN Virtual Miniport Adapter for Windows x64

Image

So now, I tried to connect again using the Cisco VPN client, but I STILL WAS UNABLE TO CONNECT!

The error message shown was the same, but I checked Event Viewer, and this time I saw some different error messages being logged (URLs removed):

Function: CManifestMgr::GetFile
File: .\ManifestMgr.cpp
Line: 461
Invoked Function: CFileDownloader::DoDownload
Return Code: -16842742 (0xFEFF000A)
Description: unknown
Failed to download from https://XXXXXXXXXXXXXX/CACHE/stc/1/VPNManifest.xml to C:\Users\bpursley\AppData\Local\Temp\32584.tmp\VPNManifest.xml

Function: CManifest::GetManifest
File: .\Manifest.cpp
Line: 245
Invoked Function: CManifest::GetManifest
Return Code: -33554423 (0xFE000009)
Description: GLOBAL_ERROR_UNEXPECTED
Failed to get manifest from https://XXXXXXXXXXXXXX/CACHE/stc/1/VPNManifest.xml

Function: CManifestMgr::ProcessManifests
File: .\ManifestMgr.cpp
Line: 672
Invoked Function: GetManifest
Return Code: 0 (0x00000000)
Description: Failed to get main manifest

Function: ConnectMgr::launchCachedDownloader
File: .\ConnectMgr.cpp
Line: 5234
Invoked Function: ConnectMgr :: launchCachedDownloader
Return Code: 1 (0x00000001)
Description: Cached Downloader terminated abnormally

Function: ConnectMgr::processIfcData
File: .\ConnectMgr.cpp
Line: 2164
Invoked Function: ConnectMgr::initiateTunnel
Return Code: -33554423 (0xFE000009)
Description: GLOBAL_ERROR_UNEXPECTED

Function: CTransportWinInet::SendRequest
File: .\CTransportWinInet.cpp
Line: 1011
Invoked Function: HttpSendRequest
Return Code: 12045 (0x00002F0D)
Description: The certificate authority is invalid or incorrect

Aha!  Now it is telling me there is a certificate problem.

So my next step was to browse to the URL that the log said the file failed to download from (remember, I removed the server address from the URL)…

https://XXXXXXXXXXXXXX/CACHE/stc/1/VPNManifest.xml

This showed me that there indeed was a problem with the server’s certificate.  I’m guessing it was a self-signed certificate or something like that.

Again, this is not my server and I don’t control the VPN configuration, but I have been connecting to it for a couple of years, I know the company that owns this server, and I trust its identity. So my next step was to configure my computer to trust this server’s certificate. Here is how I did that.

Using Chrome (you should be able to do this with other browsers, though), I browsed to the URL, right-clicked on the lock icon to the left of the https in Chrome’s URL bar, clicked on the “Certificate information” hyperlink, and switched to the Details tab.

Image

From there, I clicked the Copy to File button and saved the certificate as type “DER Encoded Binary X.509 (*.cer)”. It doesn’t matter what name you give it; just save it somewhere you can find it later, because we’re going to use that file in the following steps. I gave mine the name MyCert.cer.

Image

Next, I ran certmgr (Windows+R and run certmgr.msc) and drilled down into Trusted Root Certificate Authorities and then into Certificates…

Image

I right-clicked on Certificates and chose All Tasks, and then Import, which brought up the Certificate Import Wizard.  Here, I chose the certificate file I saved earlier and clicked Next.

Image

On the next page, I just left it set to Trusted Root Certificate Authorities, clicked Next again, and then clicked Finish.

Image

After clicking Finish, I got a security warning saying Windows cannot validate that the certificate is actually from the server it claims to be from, and it asked me to confirm whether I wanted to install this certificate.  I chose Yes, and it installed the certificate.

Image

I opened up the Cisco AnyConnect VPN Client and tried to connect again.  This time it was successful!

Hopefully this will help someone else who might be dealing with problems using the Cisco AnyConnect VPN Client after upgrading to Windows 10.