Running Elasticsearch and Kibana locally using Docker

In this post, I will show how to run Elasticsearch and Kibana in Docker containers on your local machine, which can be helpful when you need to set up a quick test environment.

References

Most of what I’m about to describe came from the Elasticsearch and Kibana reference documentation. Version 6.4 is the current version as of the date of this blog post.

Prerequisites

You will need Docker and Docker Compose installed on your local machine.

Defining Docker Containers

docker-compose.yml
version: '3'

services:

  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:6.4.0
    ports:
      - 5601:5601

volumes:

  esdata:
    driver: local

This file defines two Docker containers, one for Elasticsearch and another for Kibana. Once you have this file, you can run docker-compose up to start Elasticsearch and Kibana in Docker on your machine. It might take a minute for the containers to fully launch, but once they do, you should be able to open a browser and navigate to Kibana at http://localhost:5601.
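If you want to confirm that Elasticsearch itself is up before opening Kibana, a quick check from the command line (using the port mapping defined above) looks like this:

# Should return a small JSON document with the cluster name and version
curl http://localhost:9200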

Loading Some Sample Data

The Kibana user guide provides a tutorial with some sample data sets. We can use the following scripts to download those data sets, load them into Elasticsearch, and create indexes for them.

One thing to note: since this is a non-production environment, and in order to keep things as simple as possible, I set the number of replicas to 0 (the default is 1) for each of the indexes. This keeps Elasticsearch from reporting the health of the indexes as yellow, since there is no other Elasticsearch node to replicate to.

load-bank
#!/bin/bash
temp=$(mktemp -d)
cd $temp
curl -s -X PUT "localhost:9200/bank" -H 'Content-Type: application/json' -d'{"settings":{"number_of_replicas":0}}'
curl -s -O https://download.elastic.co/demos/kibana/gettingstarted/accounts.zip
unzip accounts.zip
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json > /dev/null
cd - > /dev/null
rm -r $temp

load-logstash
#!/bin/bash
temp=$(mktemp -d)
cd $temp
curl -s -X PUT "localhost:9200/logstash-2015.05.18" -H 'Content-Type: application/json' -d'{"settings":{"number_of_replicas":0},"mappings":{"log":{"properties":{"geo":{"properties":{"coordinates":{"type":"geo_point"}}}}}}}'
curl -s -X PUT "localhost:9200/logstash-2015.05.19" -H 'Content-Type: application/json' -d'{"settings":{"number_of_replicas":0},"mappings":{"log":{"properties":{"geo":{"properties":{"coordinates":{"type":"geo_point"}}}}}}}'
curl -s -X PUT "localhost:9200/logstash-2015.05.20" -H 'Content-Type: application/json' -d'{"settings":{"number_of_replicas":0},"mappings":{"log":{"properties":{"geo":{"properties":{"coordinates":{"type":"geo_point"}}}}}}}'
curl -s https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz | gunzip > logs.jsonl
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl > /dev/null
cd - > /dev/null
rm -r $temp

load-shakespeare
#!/bin/bash
temp=$(mktemp -d)
cd $temp
curl -s -X PUT "localhost:9200/shakespeare" -H 'Content-Type: application/json' -d'
{
 "settings": {"number_of_replicas":0},
 "mappings": {
  "doc": {
   "properties": {
    "speaker": {"type": "keyword"},
    "play_name": {"type": "keyword"},
    "line_id": {"type": "integer"},
    "speech_number": {"type": "integer"}
   }
  }
 }
}'
curl -s -O https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json > /dev/null
cd - > /dev/null
rm -r $temp

Using these scripts, we can download some sample data and load it into our Elasticsearch instance using ./load-bank, ./load-logstash, and ./load-shakespeare.
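Because the indexes were created with zero replicas, the cluster health should report green rather than yellow once the data is loaded. You can confirm that with:

curl -s 'localhost:9200/_cluster/health?pretty'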

You can verify the newly created indexes using curl -X GET "localhost:9200/_cat/indices?v" or by opening a browser and navigating to http://localhost:5601/app/kibana#/management/elasticsearch/index_management.

Resetting the environment

All of the data from your dockerized Elasticsearch instance is stored in a docker volume called esdata. This volume will be persisted even after you stop and restart Docker. If you want to start from a clean slate you can run docker-compose down -v which will delete the esdata volume. The next time you run docker-compose up the volume will be recreated empty.
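In other words, a full reset looks like this:

# Stop the containers and delete the esdata volume
docker-compose down -v

# Recreate the containers with a fresh, empty volume
docker-compose up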

Conclusion

In this post, I showed how you can run Elasticsearch and Kibana locally using Docker. While the configuration I presented is not suitable for a production environment, it does offer a simple way to quickly spin up an environment for testing or experimentation.

Deploying an ASP.NET Core Web Application to Ubuntu

In this post I’m going to walk through the steps needed to deploy and host an ASP.NET Core Web Application on Ubuntu.

Install ASP.NET Core

First, after you provision your server (I’m using an AWS nano instance running Ubuntu 16.04), log in using SSH and install ASP.NET Core. To do that, follow the instructions on Microsoft’s Get Started with .NET Core page.

sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
sudo apt-get update

And then…

sudo apt-get install -y dotnet-dev-1.0.4
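You can confirm the SDK installed correctly by checking the version reported by the dotnet CLI:

dotnet --version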

Create a folder for your web app

Next, create a folder for your web application under /var/www. In this blog post, I will be deploying an application called WebApplication1.

sudo mkdir /var/www
sudo mkdir /var/www/WebApplication1

Now copy your files to /var/www/WebApplication1.
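How you get the files onto the server is up to you. As one possible approach (a sketch only — the key file, hostname, and paths below are placeholders), you could publish on your development machine and copy the output over with scp:

# On the development machine: publish a release build of the application
dotnet publish -c Release -o ./publish

# Copy the published output to a temporary location on the server
scp -i mykey.pem -r ./publish ubuntu@your-server:/tmp/WebApplication1

# On the server: move the files into place
sudo cp -r /tmp/WebApplication1/* /var/www/WebApplication1/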

And then set the ownership of the WebApplication1 folder and all of its contents to www-data.

sudo chown -R www-data:www-data /var/www/WebApplication1

Run the web app as a service

At this point, we could run our web application with the command dotnet /var/www/WebApplication1/WebApplication1.dll, but if it crashes or the server is rebooted, it will stay down. Instead, we want to run the web application as a service that starts automatically when the server boots. In Windows, we used IIS to run our web applications, and it took care of starting them when IIS started and restarting them if they crashed. With ASP.NET Core, IIS is no longer required because web applications have an in-process web server (Kestrel), but we still need a way to launch our web application when the server starts and keep it running. In Ubuntu, we can use systemd for that.

To use systemd, we first need to define our web application as a service. We do that by creating a file under the /etc/systemd/system folder.

sudo nano /etc/systemd/system/WebApplication1.service

And in that file, add the following content:

[Unit]
Description=WebApplication1

[Service]
WorkingDirectory=/var/www/WebApplication1
ExecStart=/usr/bin/dotnet /var/www/WebApplication1/WebApplication1.dll
Restart=always
RestartSec=10
SyslogIdentifier=WebApplication1
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target

After saving that file, enable the service so that it starts when the server boots, and then start it.

sudo systemctl enable WebApplication1
sudo systemctl start WebApplication1
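You can check that the service came up and that Kestrel is answering on its default port:

# Show the current status of the service
sudo systemctl status WebApplication1

# Kestrel listens on port 5000 by default
curl http://localhost:5000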

Install and configure nginx

At this point, our web application is running on the default port, which is 5000. We probably want it to be reachable on a different port though, such as port 80. We may also want to configure things such as SSL or gzip compression. In Windows, again, we used IIS for things like these, but in Ubuntu, we can use nginx.

To use nginx, first we need to install it.

sudo apt-get install -y nginx

Next, we need to define our site by creating a file in the /etc/nginx/sites-available folder.

sudo nano /etc/nginx/sites-available/WebApplication1

Add the following content to the file:

server {
	listen 80;
	
	access_log /var/log/nginx/WebApplication1.access.log;
	error_log /var/log/nginx/WebApplication1.error.log;

	location / {
		proxy_pass http://127.0.0.1:5000;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection 'upgrade';
		proxy_set_header Host $host:$server_port;
		proxy_cache_bypass $http_upgrade;
	}
}

This is just a very basic nginx site configuration file and doesn’t include things like compression or SSL, but this file is where you would configure these things.

Now, we need to make a symlink to this file in the /etc/nginx/sites-enabled folder.

sudo ln -s /etc/nginx/sites-available/WebApplication1 /etc/nginx/sites-enabled/WebApplication1

And remove the default site that comes with nginx.

sudo rm /etc/nginx/sites-enabled/default
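Before restarting nginx, it’s worth checking that the configuration is valid:

sudo nginx -t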

And finally restart the nginx service.

sudo service nginx restart

Finishing up

That’s it. If you are using AWS EC2, the only thing left to do is make sure your security group allows inbound traffic on port 80 so that you can reach the server using a web browser.
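If you prefer the command line to the AWS console, the same rule can be added with the AWS CLI — a sketch only, since the security group ID below is a placeholder:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0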

New Angular 4 Dashboard Demo Site

A while ago, I created a time sheet demo site using AngularJS (1.x). Now I’ve finally gotten around to creating a different demo site using Angular 4. This demo site is a dashboard. The dashboard is configurable with any number and combination of items, but the data is entered manually (using a built-in form) instead of being pulled automatically from various sources, as you would probably want a real-life dashboard to do. Since the dashboard is just for demo purposes, I didn’t really worry about taking it that far. It should not be too much of a stretch, however, to change the back end to populate the dashboard data automatically from external sources.

Besides Angular, the dashboard also uses Chartist to draw gauges and background charts of time-series data. The back end is an ASP.NET Core (C#) Web API, using JSON web tokens (bearer tokens) for authentication.

For storage, again since this is a demo, I went with something simpler than a database and used a library I created called FilePersist which simply serializes the dashboard object to a file in a directory. FilePersist was inspired by the node-persist library for Node.js.

Here are some screenshots showing the functionality of the dashboard.

Getting an AWS Lambda Function to work with an API Gateway Trigger

AWS Lambda is Amazon’s way of letting you write functions that run without you having to provision an EC2 instance. One way these functions can be triggered is through an HTTP request using API Gateway. But out of the box, if you try to do this, you will get an HTTP status code of 502, and if you investigate further using the API Gateway test functionality, you will see an error message saying “Execution failed due to configuration error: Malformed Lambda proxy response”.

The problem is that there is an extra requirement for Lambda functions that use API Gateway as a trigger: they must return a specifically formatted JSON result, as described in the API Gateway documentation on Lambda proxy integrations.

Unfortunately the default “Hello World” code you are given looks like this, which does not conform to the required JSON format:

exports.handler = (event, context, callback) => {
    // TODO implement
    callback(null, 'Hello from Lambda');
};

You can get it to work by changing the callback to return an object instead of a string, like this:

exports.handler = (event, context, callback) => {
    callback(null, {
        'statusCode': 200, 
        'headers': {}, 
        'body': 'Hello from Lambda'
    });
};
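With that change deployed, a request to the API Gateway endpoint should now come back with a 200 and the body you supplied. For example (the invoke URL below is just a placeholder for your own):

curl -i https://abc123.execute-api.us-east-1.amazonaws.com/prod/hello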

There is one more thing that you may need to watch out for. If you are trying to execute your Lambda function from client-side code in a web browser, you might also run into an error on the client side where you get an HTTP status code of 403 and a message saying No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. The response had HTTP status code 403.

You can get around this problem by enabling CORS on your API Gateway resource.
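Enabling CORS in the API Gateway console adds an OPTIONS method to the resource that returns the CORS headers. You can verify the preflight response from the command line — again, the invoke URL is a placeholder:

curl -i -X OPTIONS https://abc123.execute-api.us-east-1.amazonaws.com/prod/hello \
    -H "Origin: http://localhost:8080" \
    -H "Access-Control-Request-Method: GET"

# Look for Access-Control-Allow-Origin in the response headers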

Detecting iBeacons on Android/iOS using Ionic 2

Ionic 2 was released on Jan 25 and I wanted to try it out on a simple demo application. Since I’ve recently been experimenting with iBeacons, I decided to make an app that can detect iBeacons that are found within a configurable range.

Ionic is a cross-platform mobile app development framework built on Cordova and AngularJS. Ionic 2 is a pretty big change from the previous version due in part to its use of Angular 2 which itself is a huge change from Angular 1.x.

The demo app I created is very simple, having only two screens as you can see from the screenshots below, but it was enough to give me a chance to create some custom providers as well as to use the built-in Storage provider.

The source code for this demo app is available on GitHub.

How To Automatically Zip ASP.NET File System Publish Output

If you use the File System publish method to publish an ASP.NET web application, you will get the output in a folder you choose. But what if you want the output in a zip compressed folder instead? I got tired of manually zipping the output folder, so I added a step to the end of the publish process to automatically compress the output folder into a zip file.

I did this by adding a custom target to my project’s pubxml file. The custom target executes the Compress-Archive PowerShell command (see the Target element near the bottom of the file below):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <LastUsedBuildConfiguration>Debug</LastUsedBuildConfiguration>
    <LastUsedPlatform>Any CPU</LastUsedPlatform>
    <SiteUrlToLaunchAfterPublish />
    <LaunchSiteAfterPublish>True</LaunchSiteAfterPublish>
    <ExcludeApp_Data>False</ExcludeApp_Data>
    <publishUrl>C:\Publish\SimpleExample</publishUrl>
    <DeleteExistingFiles>True</DeleteExistingFiles>    
  </PropertyGroup>
  <Target Name="ZipPublishOutput" AfterTargets="GatherAllFilesToPublish">
    <Exec Command='powershell -nologo -noprofile -command "compress-archive -force -path $(WPPAllFilesInSingleFolder)\* -destinationpath $(publishUrl).zip"' />
  </Target>
</Project>

Benchmarking Amazon AWS vs Microsoft Azure for Low-End Windows Virtual Machines

Yesterday I benchmarked low-end Linux virtual machines using AWS and Azure. Today I’m going to follow up with another benchmark of AWS and Azure virtual machines running Windows Server 2012 R2 DataCenter edition.

 

Methodology

For this test, I chose to benchmark the lowest-end virtual machines for AWS and Azure, as well as another slightly more powerful, but still low-end, virtual machine from each. I created each virtual machine using the default settings for Windows Server 2012 R2 DataCenter edition and did not attempt to perform any optimizations. After launching the VM, I used Server Manager to enable the Application Server role and install the .NET Framework 3.5 feature, which is a prerequisite for installing Novabench, the benchmarking software I will use to measure performance. Then I ran Windows Update to make sure each instance had the latest updates installed and rebooted the machine. When the machine came back up, I remoted in again and installed and ran Novabench.

 

Specs

Instance Type        | CPU             | Memory (GiB) | Price per Hour (USD)
AWS Nano (t2.nano)   | 1 vCPU          | 0.50         | 0.0082
AWS Small (t2.small) | 1 vCPU          | 2.00         | 0.0320
Azure Basic A0       | 1 Core (Shared) | 0.75         | 0.0180
Azure Standard D1    | 1 Core          | 3.50         | 0.1400

Price shown is the listed hourly price for the virtual machine only (the on-demand price for AWS) as of 2/27/2017 and does not include any storage or bandwidth costs.
 

Results

Instance Type     | CPU Score | RAM Speed (MB/s) | Disk Write Speed (MB/s)
AWS Nano          | 162       | 10,959           | 81
AWS Small         | 162       | 8,166            | 151
Azure Basic A0    | 2,068     | 37               | 28
Azure Standard D1 | 8,473     | 126              | 110

CPU Score Comparison

RAM Speed Comparison

Disk I/O Comparison

When it comes to displaying Price/Performance, I did something a little different in this blog post than in yesterday’s post. The AWS Nano instance offered the best price/performance ratio, so I normalized the other price/performance ratios against it: the Nano instance is 1.0 and all others are relative to that. I’m not sure whether that is the best way to show it, but it was the approach I came up with for displaying all three data points (CPU, RAM, and Disk) on the same bar chart.

Price/Performance Comparison
 

Data

In case you are interested, here is a screen capture of the Novabench output for each benchmarking session:

 

Conclusion

Just like last time, when I benchmarked low-end Linux virtual machines, AWS ended up performing better and doing so at a lower overall cost. In my previous blog post, I indicated a few reasons I thought might explain the difference in performance (SSD vs HDD and CPU Bursting) and I believe those also play a role in the differences here.

While my benchmarking process is by no means the most scientific and is limited to only low-end instances, I hope that it provides a glimpse into the performance differences you might find when choosing between AWS and Azure for hosting a virtual machine.

Benchmarking Amazon AWS vs Microsoft Azure for Low-End Linux Virtual Machines

When you create a virtual machine in AWS or Azure, you are given some basic specs to choose from, but how do you really know what you are getting performance-wise, and how does a VM from one service compare to one from another? In this post I’m going to attempt to benchmark the performance of low-end Amazon AWS EC2 and Microsoft Azure virtual machines. My goal is to find out which is the best value.
 

Methodology

For the comparison, I chose to use the two least expensive virtual machine instance types from both Microsoft and Amazon. I created each of the instances using the default values, with the exception of disk size, which I increased to 20 GiB for the AWS instances. I ran sysbench on each freshly provisioned Ubuntu 16.04 LTS virtual machine, with no tuning or optimization on my part. To keep it as simple as possible, I only ran the benchmark one time (on Sunday, Feb 26, 2017, at around 12:00 PM ET). I realize that a temporary condition might affect the results, but I did go back and re-run the benchmarks to make sure there was no significant difference in the results.

After creating each instance, I logged in via SSH and executed the following commands:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install sysbench
sudo reboot

After rebooting, I logged back in and executed the following commands:

lsb_release -a
df
sysbench --test=cpu --cpu-max-prime=20000 run
sysbench --test=fileio --file-total-size=5G prepare
sysbench --test=fileio --file-total-size=5G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench --test=fileio --file-total-size=5G cleanup
cat /proc/cpuinfo
cat /proc/meminfo

 

Specs

Instance Type        | CPU             | Memory (GiB) | Disk (GiB) | Price per Hour (USD)
AWS Nano (t2.nano)   | 1 vCPU          | 0.50         | 20         | 0.0059
AWS Micro (t2.micro) | 1 vCPU          | 1.00         | 20         | 0.0120
Azure Basic A0       | 1 Core (Shared) | 0.75         | 30         | 0.0180
Azure Basic A1       | 1 Core          | 1.75         | 30         | 0.0230

Price shown is the listed hourly price for the virtual machine only (the on-demand price for AWS) as of 2/26/2017 and does not include any storage or bandwidth costs.
 

Results

Instance Type  | CPU Total Time (s) | CPU ms per Request | CPU Price/Performance | Disk I/O Total # of Events | Disk I/O ms per Request | Disk I/O Price/Performance
AWS Nano       | 29.60              | 3.06               | 0.018                 | 420,748                    | 0.82                    | 0.005
AWS Micro      | 30.48              | 3.01               | 0.036                 | 483,253                    | 0.77                    | 0.009
Azure Basic A0 | 129.91             | 21.07              | 0.379                 | 8,400                      | 66.92                   | 1.205
Azure Basic A1 | 98.61              | 17.08              | 0.393                 | 13,100                     | 50.12                   | 1.153

Price/Performance is the time per request (in ms) multiplied by the price per hour, so lower is better. For example, the AWS Nano CPU figure is 3.06 ms × 0.0059 USD/hour ≈ 0.018.

CPU Benchmark Results

Disk I/O Benchmark Results

Price/Performance Results
 

Data

In case you are interested, here is the console output of each benchmarking session:

 

Conclusion

AWS t2.nano instances are a great value if you need to host something that is not too resource intensive. The two Azure VMs I benchmarked just did not stack up well against supposedly comparable VMs from AWS. In fact, the difference was so great that I even tried running a benchmark for a slightly more expensive Azure Standard A1 instance (standard tier instead of basic tier), but the results were not much better.

The difference in CPU performance might be due to the “burstable” nature of AWS t2 instances, allowing an idle instance to accrue CPU credits that can be used to burst CPU performance when needed (see CPU Credits for more info on that).

The difference in Disk I/O performance is likely due to AWS using SSD storage by default, as opposed to Azure, which uses HDD by default. Azure does allow you to opt for Premium Storage, but that is not enabled by default when you create a VM, so it was not part of this benchmark. If I do another benchmark, I may try using premium storage on Azure.

I also wonder if the Azure benchmarks would be better for Windows virtual machines, and if I get motivated, I may run some follow-up tests to find out. For Ubuntu virtual machines, however, the results clearly show that the AWS instances not only provide better performance, but do so at a lower cost.

Getting started with iBeacons and Windows 10

I recently picked up an Estimote Proximity Beacons Developer Kit, which contains three BLE (Bluetooth Low Energy) beacons that are compatible with iBeacon and Eddystone protocols.

A developer can use beacons to determine whether the user of an application is within the proximity of a beacon (and, to a lesser extent, how far away they are). One example of how this could be used is by a museum that has a mobile app that can detect beacons and automatically display information about exhibits. A retail store might use beacons to determine when a customer visits their physical location or even detect their movement within a store. A manufacturing company could use beacons to simplify the tracking of equipment or inventory. In short, beacons enable software to become aware of its physical environment.


There are many examples online showing how to detect beacons from a mobile device (iOS or Android), but I could not find much information about how to use beacons from Windows. It turns out beacons can be used by a Windows 10 application, and in this blog post I will create a simple console application that can detect them.

First, we will need to install a couple of prerequisites on our development PC:

  • Visual Studio 2015
  • Windows 10 SDK

Once you have Visual Studio 2015 and Windows 10 SDK installed, open Visual Studio and create a new Console Application.

Next, add a reference to C:\Program Files (x86)\Windows Kits\10\UnionMetadata\Windows.winmd. You will need to use the browse button in the reference dialog to locate this file. Also note that the extension is winmd not dll, so you will need to change the file filter to show all files in order to see it in the file dialog.

EDIT: Instead of referencing Windows.winmd directly, a simpler and more thorough way to use WinRT from a desktop application is to install the UwpDesktop NuGet package. Install this package in your newly created project.

Now we’re ready to write some code. Because this is an example, everything will be in Program.cs:

using System;
using System.Linq;
using System.Threading;
using Windows.Devices.Bluetooth.Advertisement;
using Windows.Storage.Streams;

namespace BeaconExample
{
    class Program
    {
        private class BeaconData
        {
            public Guid Uuid { get; set; }
            public ushort Major { get; set; }
            public ushort Minor { get; set; }
            public sbyte TxPower { get; set; }
            public static BeaconData FromBytes(byte[] bytes)
            {
                if (bytes[0] != 0x02) { throw new ArgumentException("First byte in array was expected to be 0x02", "bytes"); }
                if (bytes[1] != 0x15) { throw new ArgumentException("Second byte in array was expected to be 0x15", "bytes"); }
                if (bytes.Length != 23) { throw new ArgumentException("Byte array length was expected to be 23", "bytes"); }
                return new BeaconData
                {
                    Uuid = new Guid(
                            BitConverter.ToInt32(bytes.Skip(2).Take(4).Reverse().ToArray(), 0),
                            BitConverter.ToInt16(bytes.Skip(6).Take(2).Reverse().ToArray(), 0),
                            BitConverter.ToInt16(bytes.Skip(8).Take(2).Reverse().ToArray(), 0),
                            bytes.Skip(10).Take(8).ToArray()),
                    Major = BitConverter.ToUInt16(bytes.Skip(18).Take(2).Reverse().ToArray(), 0),
                    Minor = BitConverter.ToUInt16(bytes.Skip(20).Take(2).Reverse().ToArray(), 0),
                    TxPower = (sbyte)bytes[22]
                };
            }
            public static BeaconData FromBuffer(IBuffer buffer)
            {
                var bytes = new byte[buffer.Length];
                using (var reader = DataReader.FromBuffer(buffer))
                {
                    reader.ReadBytes(bytes);
                }
                return BeaconData.FromBytes(bytes);
            }
        }

        static void Main(string[] args)
        {
            var watcher = new BluetoothLEAdvertisementWatcher();
            watcher.Received += Watcher_Received;
            watcher.Start();
            Console.WriteLine("Bluetooth LE Advertisement Watcher Started (Press ESC to exit)");
            while (true)
            {
                Thread.Sleep(100);
                if (Console.KeyAvailable && Console.ReadKey(true).Key == ConsoleKey.Escape)
                {
                    break;
                }
            }
            watcher.Stop();
            Console.WriteLine("Bluetooth LE Advertisement Watcher Stopped");
        }

        private static void Watcher_Received(BluetoothLEAdvertisementWatcher sender, BluetoothLEAdvertisementReceivedEventArgs args)
        {
            const ushort AppleCompanyId = 0x004C;
            foreach (var adv in args.Advertisement.ManufacturerData.Where(x => x.CompanyId == AppleCompanyId))
            {
                var beaconData = BeaconData.FromBuffer(adv.Data);
                Console.WriteLine(
                    "[{0}] {1}:{2}:{3} TxPower={4}, Rssi={5}",
                    args.Timestamp,
                    beaconData.Uuid, 
                    beaconData.Major, 
                    beaconData.Minor, 
                    beaconData.TxPower, 
                    args.RawSignalStrengthInDBm);
            }
        }
    }
}

The code above is pretty straightforward. It just creates an instance of BluetoothLEAdvertisementWatcher, starts watching, handles Received events, and displays the information from advertisements that are received.

If everything works, running the project should print a line to the console for each iBeacon advertisement it receives.

The default UUID for Estimote beacons is B9407F30-F5F8-466E-AFF9-25556B57FE6D, and I haven’t changed that yet. My three beacons can be distinguished using the Major and Minor identifiers (again, the ones set by Estimote, until I change them):

  • 24554:52084
  • 8008:43189
  • 50483:25448

In this post, I showed how to create a simple console application that is able to detect nearby iBeacons. I hope this will be helpful to anyone looking to integrate proximity detection into their Windows 10 applications.

EDIT: You might find that Windows 10 doesn’t seem to be able to detect BLE advertisements as quickly or consistently as Android or iOS. I found an interesting Stack Overflow question that was answered by a user (Emil) who indicated that Windows had a hard-coded scan interval and scan window that would cause it to miss advertisements, only picking up on roughly 1/7th of them based on the scan interval and window numbers he provided. His answer offered a way to use the Windows API function DeviceIoControl to manually initiate the scan using a more appropriate scan interval and window.