What is the ‘Internet of Things’ and where does Microsoft sit?

 Contributed by Microsoft Cloud Evangelist Ewan Dalton; first written on his blog, the ‘Electric Wand’. He has also written The Enterprise impact of the “Internet of Things”.
The term “Internet of Things” (or IoT) has reached buzzword fever pitch in 2014, thanks in part to a slew of product announcements at the Consumer Electronics Show in January. Combined with high-profile acquisitions (such as Google’s purchase of smart home technology company Nest), there have been many news stories associating their subjects with the Internet of Things.
Even people who work in the IoT world sometimes struggle to articulate what it actually is. There are several ways of looking at IoT, however, and some of the scenarios are only being developed now and will become both significant and disruptive in ways that we probably don’t yet understand. There are, in fact, numerous types of “Internet of Things” application.
The consumer market provides plenty of examples of devices measuring and reporting data back to some kind of service, letting users put that data to a purpose that would otherwise be difficult or impossible. “Wearable technology” is a category that typifies this approach.
Industrial Internet of Things applications have often existed for years, just under different names – M2M, SCADA, telemetry of numerous sorts – though they are being combined in new ways, and with new variants of technology, to open up new scenarios such as telematics. Industrial uses could mean using IoT technology to control a manufacturing process or to monitor complex machines in the field, extending even to remotely monitoring cars for the purposes of insurance, road tolling, safety and performance improvements.
Finally, companies will find ways to use IoT technology inside their own environments, offering up data that is consolidated from other systems and collected using sensors, to be combined with customer relationship management systems, building control systems and a host of others.
The interesting thing is, the majority of these examples won’t connect the many things to the Internet at all – maybe the devices and sensors at the very edge of the system will be individually addressable, but they almost certainly won’t be directly connected to the internet. Other groups have tried to establish alternative definitions – some talk of a “sensor mesh” or a “network of sensors”, and Cisco, for example, talks about the “Internet of Everything” (and has some other, intriguing ideas such as Fog Computing… it’s like Cloud Computing but nearer the ground). It looks like the term IoT has stuck, at least until we stop talking about it as if it’s something special or different, rather than just the normal way that these things work.
The definitive definition of the Internet of Things
The term “Internet of Things” was coined in 1999 by Kevin Ashton, of Procter & Gamble and later MIT. He later wrote, in 2009:
“Nearly all of the data available on the Internet were first captured and created by human beings—by typing, pressing a record button, taking a digital picture or scanning a bar code. The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best.
The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”
Analysts express differing views as to the exact scale (IDC reckons 212 billion devices with a market value of nearly $9 trillion in only six years), but all estimates of the future size of the Internet of Things business are extraordinarily large. Even if the lower-end forecasts are out by a factor of 10, there will still be billions of connected devices within a few years, and the reason those devices are connected is that they have something to communicate.
The secret sauce, the Holy Grail, the raison d'ĂȘtre for Internet of Things is data. That much is pretty obvious to anyone with more than a passing interest in the field – why would you go to the bother of deploying a load of sensing devices and the infrastructure to manage and communicate with them, unless the data they provide is particularly interesting?
At Microsoft, we work with lots of partner companies who use our technology to build their own products and solutions. This often puts us into contact with people and organisations who are doing things we’d never expected or even imagined they’d do, and that is one of the reasons why it’s such a great place to work and an amazing ecosystem to be part of.
As part of working with companies that are beginning to inhabit this growing Internet of Things niche, a special interest group within Microsoft UK has drawn a few interesting, and sometimes controversial, observations:
    • No one technology or technology provider will own the IoT, and a lot of systems will use a smorgasbord of standards and components
    • Scaling a system that manages a few hundred gadgets to one dealing with hundreds of thousands of sensors is very hard, as is managing and analysing a massive quantity of data
    • There are “stacks” within IoT:
        • Sensors: the “things” in the IoT, massive in number but small in compute power
        • Hubs: the concentrators which harvest data from sensors, provide some degree of control, logic and processing and ultimately pass the information up the chain
        • Comms: many incompatible but functionally similar wireless standards will connect sensors to hubs, and hubs to the…
        • Cloud: the place where the data is brought back to, where analysis can take place on it and where insights can be passed on to other systems or even back to the devices (a minimal sketch of this last hop follows the list)
    • The real value will come from “latent data”
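To make that hub-to-cloud hop concrete, here is a minimal, hypothetical sketch in PowerShell of a hub batching sensor readings and posting them to a cloud ingestion endpoint. The endpoint URL and the message fields are illustrative assumptions, not a real service.

```powershell
# Hypothetical sketch: a hub forwards batched sensor readings to a cloud endpoint.
# The endpoint URL and field names below are illustrative assumptions.
$readings = @(
    @{ sensorId = 'temp-01'; value = 21.4; unit = 'C'; timestamp = (Get-Date).ToUniversalTime().ToString('o') },
    @{ sensorId = 'temp-02'; value = 19.8; unit = 'C'; timestamp = (Get-Date).ToUniversalTime().ToString('o') }
)
$body = ConvertTo-Json -InputObject $readings
Invoke-RestMethod -Uri 'https://example-ingest.cloudapp.net/api/telemetry' `
                  -Method Post -Body $body -ContentType 'application/json'
```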
What is “latent data”?
To a large degree, the IoT is an emerging set of technologies, protocols and patterns for the collection, aggregation, analysis and actioning of intrinsic, latent data, and the management of this process.
Data is ubiquitous and inherent in all environments, be it an outside space, an ecosystem, a manufacturing complex, a supply chain or a city. This data can be regarded as “latent data” or “potential data” in the physical world – the data exists but is not accessible, or if it is accessible then it is of limited use, since it is not combined with other, relevant data (such as historical readings, or data from complementary systems). Maybe the data is being accessed by some siloed system which uses it for its own purposes but was never designed to provide any wider access to it.
Every physical thing has properties and attributes which may be discernible but are probably not being measured. A mechanical thermostat has intrinsic data on the temperature of a room and its own state, but this data remains in the physical world. A light bulb could be measured to see if it’s on or off, but this only becomes truly interesting when we can measure all the bulbs in a building, or a facility, or a city. If we can sense when all the bulbs need replacing, or alter their individual brightness depending on other conditions, that’s even more interesting.
For the avoidance of doubt, “Latent Data” is also a legal term applied to deleted files that need special forensic tools to extract… we’re talking about a more ethereal concept here, that there is data all around us in everything, but it’s untapped – and therefore, latent – unless we specifically decide to measure it and do something with it.
We believe that IoT is fundamentally about bringing this latent, intrinsic data into the digital world, in a way that allows the creation of value. This value is due to the aggregation of collected data, its analysis, and the use of that insight to drive decision-making and actions. Freeing data that is locked inside existing proprietary systems is another source – it may be that systems built for a specific purpose generate data that could be useful outside of the context that was originally intended, and if it can be shared with other systems (albeit with the right degrees of control), incremental value can be realised.
The Cloud is the place where this data will be collected, where the data is likely to be stored in the long term, and where data aggregation and analysis will (mostly) occur.
The IoT patterns, the technologies and protocols that allow for this aggregation of latent data, are similar in a way to the OSI seven-layer networking model – the stacks which encompass devices, communications and Cloud. There are differing degrees of abstraction between these stacks and their constituent layers, which means the IoT is inherently (and to the benefit of everyone within it and using it) a heterogeneous world.
Microsoft’s role in the Internet of Things
Microsoft has developed embedded systems that already run in billions of devices, and some of these could be considered part of an “Intelligent System” that forms part of the IoT, though many of the billions of devices being forecast as part of IoT won’t run a highly functional operating system or perform anything more than perfunctory processing of data.
Microsoft has a hugely scalable and low cost cloud computing system in the Microsoft Azure platform, where IoT applications can be quickly deployed and where the data that results from them can be securely kept and worked on. Many partner companies already have IoT applications running in Azure, and Microsoft is also building technology to help customers get value out of their existing environments.
Almost all IoT applications are likely to generate large volumes – petabytes, even more – of data, which will only become valuable when it is cost effective to keep it for a period of time and to perform large-scale computational analysis, both of which are difficult to do or economically unviable without the availability of public cloud computing.

Lots of developers are building systems that fit within the definition of IoT using systems like Raspberry Pi or Arduino, writing code in Python or Java and storing data in some form of NoSQL database… and that’s just fine by us. We think we have just the cloud services these developers need to build and run their code, whatever the technology they use, and the Azure platform provides you – as the end customer of the 3rd party solution – with a secure and easily managed place to put the applications and to hold the data.
With Microsoft Azure providing the backplane for these billions of devices to communicate – whether they are running Microsoft software or not – and to store and analyse their data, there is an opportunity for us and our partners to enable and monetize far-reaching change.


Office 365 MCSA – Post Exam feedback!

If you have been following my journey to the Office 365 MCSA certification in previous posts, you will know that I was scheduled to take the exam on Wednesday 30th July whilst attending the Microsoft internal TechReady conference.
Not many people outside Microsoft will have heard of TechReady; that is because the event is for full-time employees to learn the new product strategies and roadmaps for the following six months, and for very good reasons it is not broadcast like the TechEd events.
The similarities between TechEd and TechReady include the provision of certification prep sessions and a certification hall for attendees to schedule and take MCP examinations whilst onsite.
I don’t have the stats for the numbers taken and passed during the week, but I can tell you that I was in the largest Prometric exam hall I have ever seen: there were over 100 testing stations and a whole bunch of proctors to keep an eye on us.
I had spent several evenings in Seattle with my nose buried in the books (or screens) to ensure that I was successful. The conference pretty much runs from 0700 to 1900 every day, so the extra effort to study through jet-lag was no small task (I don’t travel well, and the eight-hour time difference was a not insignificant factor in my studies).
I did enjoy the surroundings of the Seattle Sheraton though and the weather was superb all week.

Having attended a prep session for the 70-346 and 70-347 exams on the Tuesday afternoon, I decided to reschedule the exam to that evening. So in I went and sat the test.
I will leave the result until the end of the post, but I will say that without that level of study I would have done much, much worse. Obviously the NDA you agree to when taking exams prevents me from disclosing the details, but I can say a few things which will help those of you preparing for the experience of 70-347: Enabling Office 365 Services.
The exam I sat was a standard one with a wide range of the item types I have listed before. These included PowerShell build questions, drag and drop, hot spot and other types.
The overriding point I would like to make is that the exam was already significantly out of date in a few ways. Anyone that is a Microsoft Azure or Office 365 user will know that these cloud-based IaaS, PaaS and SaaS products change almost weekly. (Why not sign up for free trials now and check them out?)
This means that screenshots in your exam may not look like the current product; I had several instances where this was the case. The strategy for exams based on these products is under review (I met and spoke to the LeX team whilst at the conference).
I am prevented from giving too much detail, and giving the number of questions is fairly pointless, as these change often; when new questions are under test, they are added in too.
I can and will say that the exam is a fair test of the whole product, covering SharePoint Online, Lync Online and Exchange Online as well as the Office 365 portal and admin consoles and the PowerShell required to run these products from the command line. It was not easy, and it caused me no end of headaches in the review of the questions.
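For a flavour of the PowerShell involved, this is the standard remoting pattern for connecting to Exchange Online as it worked at the time of writing; the mailbox query at the end is just an illustrative example.

```powershell
# Connect to Exchange Online via PowerShell remoting (the pattern current in 2014).
$cred = Get-Credential   # your Office 365 admin credentials
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# Example once connected: list mailboxes and their primary SMTP addresses.
Get-Mailbox | Select-Object DisplayName, PrimarySmtpAddress

Remove-PSSession $session   # tidy up when finished
```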
Top exam-taking tip from me: your first answer is normally your best one, and changing your answers just gets you in a mess.
I am pleased to say that I was successful, with no wasted effort: I scored exactly the mark required to pass, no more, no less, as you can see from the exam sheet below.

[Image: exam score report showing a passing score of 700]
If you follow the Microsoft Psychometrician Liberty Munson on Born2Learn, you will know that a score of 700 is not 70%, and does not reflect a result worse than a score of 800 or 900. Personally I have never understood this and look forward to someone explaining it to me in words I understand. I have shown this score to point out that even though I studied really, really hard, know the product very well indeed, and love PowerShell, had I got one more question wrong I would have failed this test.
So study hard, study the correct things, and good luck.
The bottom line, however, is that I now have the MCSA Office 365 to add to my transcript. This will help me in my career as a Technical Evangelist, will flag me up on LinkedIn to people looking for speakers and allow me to teach the two courses related to the certification.

[Image: MCSA Office 365 certification badge]





Ethical Hacking for an IT Professional

 The following article is contributed by Richard Millett, a Senior Instructor for Firebrand Training with 30 years’ experience specialising in networks and security.

As an IT professional who has just acquired the skills to administer your assets, and the certifications to prove it, the next stage has to be ensuring they are secure. Security is not just a question of firewalls, anti-virus and permissions; it is the much wider topic of protecting the entire footprint of your organisation, both technically and physically.
The current MCSA and MCSE certifications prove you can administer your systems on a day-to-day basis to provide functionality and reliability, but you now have to go one step further to ensure an adequate level of security. Using the services of a professional penetration tester can help to measure your security status and move you to a consistent level of security, but why not take the time to progress your knowledge to the next level by training in the area of ethical hacking?
Hackers do not play by the rules, and attacks are getting progressively more sophisticated against both networks and users. Attacks may be launched remotely over the network or initiated from inside it, so the ability to spot and mitigate both types of attack is now vital.
Ethical hacking is the process of providing protection by measuring security using the same tools and techniques as hackers do, but within an agreed framework. Understanding networks is one thing; learning how to scan and probe networks, how to manipulate network packets, and how to launch attacks against your own servers and clients – this is the world of the ethical hacker. Understanding how the hackers get in is the key to keeping them out. Understanding the hacking cycle, from reconnaissance to covering tracks, helps a good IT professional to improve the security status of not just the network and systems but of the entire organisation.
You have to remember that vulnerability scanners can check your systems and ensure that they are patched to the correct level, but if you can run a scanner against your own systems, could the bad guys do the same?
By mastering the same techniques as the hackers you are able to stay one step ahead by ensuring that their methods won’t work on your systems. System and network security is dynamic in a world of constantly evolving threats. An IT professional needs to be equally dynamic by mastering the latest techniques involved in cyber-crime. To be an ethical hacker requires a good knowledge of all aspects of IT infrastructure from networks to web to wireless. These are all ways in to your systems and the damage caused, be it reputational or financial, could be irreparable.
Don’t forget physical security either, think of the following questions:
  • Is your server room access as secure as it could be?
  • Would it be possible to install a hardware key logger on one of the systems?
  • How aware are you of social engineering techniques?
Consider also the well-known system threats against your systems:
  • A good MCSA in SQL will want to ensure that the databases that they are responsible for are not being exploited by known SQL injection vulnerabilities (see the sketch after this list).
  • A .NET developer will not want to be the guilty party when an application suffers from a buffer overflow attack.
  • What happens when a well-crafted DNS poisoning attack is launched?
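As a minimal sketch of the first point on that list, this is how a parameterised query keeps user input out of the SQL text, using ADO.NET from PowerShell; the connection string, table and column names are illustrative assumptions.

```powershell
# Hypothetical sketch: binding user input as a parameter, never concatenating it
# into the SQL string, is the standard defence against SQL injection.
$userInput = Read-Host 'Category to search'

$conn = New-Object -TypeName System.Data.SqlClient.SqlConnection `
    -ArgumentList 'Server=.;Database=Shop;Integrated Security=True'   # assumed connection string
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = 'SELECT Name, Price FROM dbo.Products WHERE Category = @category'
$null = $cmd.Parameters.AddWithValue('@category', $userInput)   # input is bound, not concatenated

$reader = $cmd.ExecuteReader()
while ($reader.Read()) { '{0}: {1}' -f $reader['Name'], $reader['Price'] }
$reader.Close()
$conn.Close()
```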
There are a number of either self-study or instructor led training courses available in all aspects of security from recognised vendors. Ethical Hacker training stretches the mind and becomes self-fulfilling because it not only improves your security posture but also creates that urge to stay one step ahead of the hackers, thus driving the need to learn more. To cement the knowledge acquired there are several recognised certifications from the SANS Institute and EC-Council that provide confirmation of the level of expertise. These certifications can give employers a good indication of not only your skill set but of your mindset as a true IT professional.

Why have a dog and bark yourself

Back in the sixties my Dad worked in an IT department where there were about a hundred people just to operate the one ICL mainframe in that data centre. These operators had banks of lever arch files containing instructions to handle every aspect of the day-to-day running of this environment, from changing tapes to setting up and executing programs like the weekly payroll run. When I started my IT career in the eighties I could do a lot of this setup with shell scripts on my Unix Data General server, and I could look after backups and updates all by myself; of course, the kit was much more reliable. Moving forward to today, we seem to have lost some of these scripting skills and seem content to use the UI.
However, if you want to manage servers at hyper-scale (one IT admin to every 1,000+ VMs) then logging into each one and changing them is simply not efficient enough. This approach is just as inefficient at smaller scales, say just ten VMs, because maintenance will only be done occasionally and the tools will be unfamiliar, meaning that changes will take longer than they need to and may lead to errors. If you have read any of my stuff or seen me present over the last year, you’ll know the solution is PowerShell. If that was true a year ago, it’s even more relevant now, as a couple of interesting technologies have quietly been released that enhance management of virtual machines and services...
  • PowerShell 4 has introduced the concept of Desired State Configuration (DSC), where a declarative configuration document (compiled to a MOF file) establishes what the state of a server should be; this can then be used either to test or to enforce that configuration on a given set of servers. At the simplest level this could be a set of features and settings on a given server, through to ensuring that given files and versions of applications are also installed. This is useful for setting up load-balanced web servers, which must be identical, and for Session Hosts in Remote Desktop Services (a minimal example follows this list).
  • Windows Azure Pack allows you to run the management portal Microsoft uses to create services in Azure on your own servers. It builds on System Center 2012 R2, specifically Virtual Machine Manager and Orchestrator, but makes calls to these services using an adjunct to the Azure Pack called Service Management Automation (SMA). This is PowerShell based, but is a classic three-tier service of a load balancer with worker roles processing tasks, driven by a database backend (SQL Server or MySQL). This is an important distinction because, while normal PowerShell will fail if the server it’s executing from fails, SMA PowerShell runbooks (as distinct from those written in Orchestrator) are resilient. The PowerShell itself is quite different: for example, if a failure occurs a runbook can resume from designated checkpoints within a script and be rerun from there (a sketch of such a checkpointed runbook also follows this list). The Azure Pack also allows you to fully package a virtual machine based on a Virtual Machine Template, but here you can inject packages to run inside the VM after it’s created and accept parameters from the Gallery Wizard, just like you can in Azure. It’s also possible to quickly create your own gallery images on Azure itself in much the same way.
[Image: Azure portal showing the end-user experience of creating a VM]
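Here is a minimal sketch of the kind of DSC configuration described in the first bullet; the node name, feature and file contents are illustrative. The configuration compiles to a MOF document which can then be tested or enforced.

```powershell
# Minimal DSC sketch (PowerShell 4+): declare the desired state, compile it,
# then enforce it. Node name, feature and file contents are illustrative.
Configuration WebServerBaseline
{
    Node 'WEB01'
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
        File DefaultPage
        {
            Ensure          = 'Present'
            DestinationPath = 'C:\inetpub\wwwroot\index.html'
            Contents        = '<h1>Hello from DSC</h1>'
        }
    }
}

WebServerBaseline -OutputPath 'C:\DSC'                 # compiles one MOF per node
Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose   # enforce the declared state
```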
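And a sketch of the checkpointing behaviour mentioned in the second bullet: SMA runbooks are PowerShell Workflows, and Checkpoint-Workflow persists progress so a failed run can resume from the last checkpoint rather than starting again. The workflow below is illustrative, not a real runbook.

```powershell
# Illustrative sketch: a PowerShell Workflow with checkpoints, the construct
# underlying resilient SMA runbooks. Server names and the work are placeholders.
workflow Update-WebFarm
{
    param([string[]] $Servers)

    foreach ($server in $Servers)
    {
        # ...patch or reconfigure $server here (placeholder)...
        "Updated $server"

        Checkpoint-Workflow   # persist progress; a resumed run carries on from here
    }
}

Update-WebFarm -Servers 'WEB01','WEB02','WEB03'
```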

    The basic thrust of this is that where, until now, Microsoft has given you the nuts and bolts of the Cloud OS to automate your own data centre, there is now more of a focus on managing your data centre in exactly the same way as Azure, which means that one IT admin could potentially manage thousands, not just hundreds, of VMs. Even if you don’t have that scale, you’ll get time back and be more agile. That is important: I have talked to more than one customer who still has to wait weeks for VMs to be provisioned on VMware, and that’s not a VMware problem per se, that’s an IT department that hasn’t got its head around process standardisation and automation. In those cases the tech-savvy user simply fires up VMs on Amazon, Google or Microsoft and bypasses the IT department roadblock. This then grows into a bigger problem: as they build trust in those platforms, more work heads off to the cloud, meaning the over-controlling IT admins have lost the very thing they wanted – control!

Internet Explorer to begin blocking out-of-date ActiveX controls

As you may have heard, Microsoft is enhancing the security of Internet Explorer by introducing the out-of-date ActiveX control blocking feature. This was shipped on August 12, as part of the monthly Windows Update.
Important: based on customer feedback, blocking will not commence until September 9, 2014, providing time for deployment and testing in your environment. The out-of-date ActiveX control blocking feature will still be distributed on August 12, 2014, together with documentation and related Group Policy templates as detailed in the resources section below.
What does this mean for your organisation?
As of September 9, 2014, if your organisation has a dependency on outdated versions of Java in the Internet Zone in affected versions of Internet Explorer, you will be impacted by this change.
Users will begin to see blocking UI outlined here—note however that this UI can be clicked through, which allows a webpage to load an outdated version of Java on a one-time basis.
If this is an unacceptable breaking change for your organisation, you have the following two options:
  • You can turn the feature off entirely, via the Turn off blocking of outdated ActiveX controls for Internet Explorer Group Policy setting (or corresponding registry key). Note however that this is the less secure option to adopt.
  • You can turn the feature off on the specific domains on which your organisation has an out-of-date Java dependency, via the Turn off blocking of outdated ActiveX controls for Internet Explorer on specific domains Group Policy setting (or corresponding registry key).
If you’re unsure whether your organisation has a dependency on outdated versions of Java, or which specific domains in the Internet Zone have such a dependency, use the Turn on ActiveX control logging in Internet Explorer Group Policy setting (or corresponding registry key). This will help you inventory the ActiveX controls being loaded into Internet Explorer in your organisation, and that information should arm you to answer the questions above and to configure and test this feature correctly. Note that you can turn this policy setting on or off regardless of the Turn off blocking of outdated ActiveX controls for Internet Explorer or Turn off blocking of outdated ActiveX controls for Internet Explorer on specific domains policy settings. It can be enabled starting August 12, 2014, once the cumulative update containing this feature and the updated inetres Group Policy settings have been installed.
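If you would rather stage the registry equivalent with PowerShell than with Group Policy, the general pattern looks like the sketch below. Be aware that the key path and value name here are assumptions for illustration only; take the authoritative names from the updated inetres Group Policy templates mentioned above before using anything like this.

```powershell
# Illustrative only: the key path and value name below are ASSUMPTIONS, not taken
# from the official documentation. Verify them against the updated inetres Group
# Policy templates before use.
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\Internet Explorer\VersionManager'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
Set-ItemProperty -Path $path -Name 'AuditModeEnabled' -Value 1 -Type DWord
```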
Recommended action:
Please make sure that you perform the appropriate level of testing and ready your business for this update to go live and begin blocking outdated versions of Java starting September 9, 2014.

5 quick wins for implementing Microsoft’s Enterprise Mobility Vision (+ learn how to do it!)

There are at least five quick wins you can get from implementing Microsoft’s Enterprise Mobility Vision: epic reports that tell you about potential security breaches; a handle on where your data is going with Cloud App Discovery; something better than passwords with simple-to-implement multi-factor authentication; an understanding of your users’ devices with Workplace Join; and devices your users will love.
The client management space is changing: when we look at information from Forrester we see that 40% of companies said that BYOD programs are a high priority, and that many of us (classed as information workers) are using more than one device. That doesn’t mean that the traditional client management space goes away, rather that it’s augmented with new capabilities to support those workloads. A few months back Brad Anderson, CVP Enterprise + Client Mobility, started an excellent blog series defining and expanding upon our enterprise mobility vision:
…to help organizations enable their users to be productive on devices they love while protecting the company.
This is the first post in a series during which I’m going to expand on some of Brad’s key points and give you practical ways that you can immediately start to give value back to your business by implementing our vision. I’ll help you solve your mobility challenges (please note that doesn’t mean I’m going to solve the issue of you being stalked on Facebook by that ex; let’s keep this on enterprise mobility!)
On that note, let’s get specific – tell me your mobility challenges in the comments, I promise to read them all and help solve some of them.

Step Zero – Try Stuff

The very first thing you’re going to want to do is to try things out. We all like to build a lab to understand the technology intimately. To do this you’ll need to lay your hands on some evaluations and trials; luckily, we’ve done everything we can to make that easy for you: take the Empower Workforce Mobility learning path on the TechNet Evaluation Center. Of course I’m not going to leave you to do that on your own: you can sign up for the trials you need, and I created this handy video to help you out.

Quick Win 1: Epic Reports

This is my favorite first thing to show people about our mobility offering because it’s simple to implement. As soon as you’ve created an Azure AD tenant (which the above video shows you how to do!) and you’ve created a user either in the cloud (IT Pro test: figure this bit out yourself) or you have some users synced from on-prem AD then you can get going. Follow these steps and in about 5 minutes you’ll see the power of Azure AD reports…
  1. Download the TOR browser (do this in a lab that’s NOT on your corporate network)
  2. Use one of your user accounts to log into myapps.microsoft.com a few times (do it about 5 times)
  3. Go to the Azure portal and, using your admin account, go to your Directory, then Reports, and select Users with anomalous sign in activity.
Now you should see something like this:
[Image: Users with anomalous sign in activity report]
This is showing that one of my users logged on from places she couldn’t have travelled between in time and was attempting to mask her IP. This is telling you that her account has probably been compromised. I bet you don’t get that with on-prem-only AD or any other identity provider. Show this to your ITSec team or CIO and they’ll ask you to show them more. The best thing is that the other reports are even better: I call them “big data for the IT admin”, but that’s for another post in the series. Let’s not stop with the quick wins though.
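If you would rather script the “create a user in the cloud” step than take the IT Pro test above, one way to do it at the time of writing was with the MSOnline module; the tenant name and display name below are placeholders.

```powershell
# One way to create a cloud test user with the MSOnline (Azure AD) module.
# The UPN and display name are placeholders; a temporary password is generated.
Import-Module MSOnline
Connect-MsolService                      # prompts for your admin credentials
New-MsolUser -UserPrincipalName 'testuser@yourtenant.onmicrosoft.com' `
             -DisplayName 'Test User' `
             -UsageLocation 'GB'
```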

Quick Win 2: Know where your data is going

You know your users are getting around your “no personal cloud storage” policy, but you don’t know how or to what extent. I hear this all the time from the admins I talk to (and the CIO is probably losing sleep over this too). Again we have a tool that can give you quick insight: Cloud App Discovery. This tool is very simple but highly effective: install the agent onto Windows PCs in your company and each PC will report back to YOUR Azure tenant information about the cloud services being used on it. So if your user decides to copy data to Box.com through the browser, you see it in the report; if they do it through installed software, you see it in the report. You can also see which user was signed into the PC and how much data they transferred.
[Image: Cloud App Discovery report]
In the report above you can see that one of my users has used a variety of services, the types of those services, the names of them and the amount of data they’ve transferred. As a bonus, all the apps with logo tiles in the top-right quadrant can instantly be managed as SaaS apps through the portal, but again, more in a later post. For now though, download the Windows 8.1 evaluation, install it and then try Cloud App Discovery.

Quick Win 3: Be Better than Passwords with MFA

As soon as you have users in the cloud and you have Azure AD Premium, you can enable Azure Multi-Factor Authentication (you have a trial if you followed the advice in Step Zero). Once it is enabled for a user, the next time that user signs in they will be asked to verify their contact phone number by opting to receive a call or text. Subsequently their sign-on will be a little different, but a lot safer:
  1. They attempt to sign on
  2. Correctly enter their password
  3. Azure MFA steps in and calls or texts them
  4. They answer or get the SMS code and enter it
  5. Their sign-on is complete.
This simple additional factor requires that the user both knows something (their password) and has something (their phone), raising the safety level quickly. In production you might not only have cloud users, but this can now be implemented through Azure AD for all on-prem AD users that are synchronised to Azure AD, without the need for an on-prem server deployment. Like all our solutions you can embrace the power of AND: on-prem and cloud. MFA is very flexible and I’ll cover it in more detail in a later series post.
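For cloud users, per-user enablement can be scripted too; a minimal sketch with the MSOnline module, assuming the user already exists, looks like this.

```powershell
# Minimal sketch: require Azure Multi-Factor Authentication for one cloud user.
# The UPN is a placeholder; the user completes proof-up at their next sign-in.
Import-Module MSOnline
Connect-MsolService

$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = '*'        # apply to all relying parties
$mfa.State = 'Enabled'

Set-MsolUser -UserPrincipalName 'alice@yourtenant.onmicrosoft.com' `
             -StrongAuthenticationRequirements @($mfa)
```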

Quick Win 4: Know your users’ devices with Workplace Join

When a conversation gets past “I don’t know what cloud apps my users are using”, it normally moves on to “I don’t know what devices they’re using”. For the past 15 years we’ve had Domain Joined devices: company owned, company managed. The real point of domain membership is to give Windows devices identity, but you probably don’t want devices that the company doesn’t own joined to your domain (and users really don’t want the GPO that deploys the corporate wallpaper on their device!). iOS and Android devices obviously don’t support Domain Join either. Workplace Join steps in and helps out. It works with all the most common devices and you can use it to permit and deny access to corporate resources with conditional access. It takes a while to implement a Workplace Join scenario, so why do I call it a quick win? Well, not all quick wins happen in 10 minutes: sometimes they take a while to implement but become fruitful quickly. If you implement Workplace Join you’ll quickly start finding out what devices your users are trying to use; that can inform policy, and policy you’ll be able to implement quickly. Luckily you can try it out in about an hour with the labs in our tech journey!

Quick Win 5: “devices they love”

The quickest win I can think of is to stop trying to please everyone all the time – it just makes everyone unhappy. Your users will love (and therefore keep using) devices that get the job done for them in the way they want it done. Sometimes that will be them selecting the device, sometimes it will be IT selecting an array of devices for them to choose from… sometimes it will be a task-specific device. In essence, the quick win is to think of managing only three device types:
  • Employee owned, company enabled
  • Company owned, employee enabled
  • Company enabled only.

7 Simple Tips To Prevent Web Application Security Breaches

Nazar Tymoshyk is a Security Consultant at SoftServe Inc. He has five years’ experience and a Ph.D. in Information Security, over seven years in network infrastructure management, and specializes in Security Consulting, Enterprise IT Consulting, Application Security Assessments, Penetration Testing, Ruby, OWASP, Linux, Virtualization/Cloud, Automation, Networking, Forensics, and Reversing.
It doesn’t matter which framework you’ve selected for Web application development: you still need proper Web application and server maintenance to avoid security breaches. Here are seven simple tips to mitigate Web application security risks and ensure Web application security.
Web Application Security Risks
In my career, I have regularly seen cases where the lack of proper web server support and maintenance resulted in a company’s Web application being hacked and exploited by attackers. Even though companies more and more often host their web applications in the cloud, and select a private cloud for the web applications that are critical to their business, it makes no difference from a hacker’s point of view. So when talking about web application security, it’s important to consider the infrastructure wherever the web app is hosted (even if it’s hosted by one of the top cloud players).
Unfortunately, it’s not unusual for businesses to invest in Web application development only, or to store all their Web applications (as well as mail servers) on a single dedicated machine without an established and safe backup process and without considering the security of the infrastructure. Additionally, if a company lacks a comprehensive security strategy and prefers to overlook the well-known security principle of “better safe than sorry”, a Web application administrator may not be ready for real-time attacks, which can result in the Web application being down and sensitive data compromised.
Sure, skimping on server and Web application maintenance, regular security check-ups and training will save you money in the short term; in the long run, however, you’ll save more by investing in a secure server hosting provider and proper software architecture instead.
A simple truth is, it doesn’t matter which framework you selected for Web application development a couple of years ago – Joomla, WordPress, ASP.NET or Java – over time they all need to be patched for discovered vulnerabilities and require regular security check-ups. The frameworks provide a fast and cheap way to create great Web applications, so businesses large and small continue using them despite the security risks presented by possible vulnerabilities; what’s important is to specifically focus (and many large brands do) on proper Web application security and maintenance.
Secure Software Development: Levels of Responsibility
Owning a Web application is similar to owning a car – both require upfront costs and a maintenance program to keep them running smoothly, and both should ultimately be easy to use and attractive. To properly maintain your Web application:
  • Check for vendor notifications about updates, patches or product withdrawals
  • Buy insurance to protect yourself against risks.
Web application security starts with a developer who writes secure code. Then, a Quality Assurance expert tests the code for bugs and possible vulnerabilities. Next, the Development Operations (DevOps) team is tasked with automating build processes, patching application and server software as well as monitoring performance and log files. At the next stage, a Security expert should review the results with security in mind.
Any mid-size or large company has an individual responsible for IT, often the CIO but sometimes this role is combined with the CTO and even the CEO. This person is responsible for IT decisions on support and Web application operations, as well as for preventing Web application security breaches, as it is the IT staff’s responsibility to support the company`s servers. A part of this process is designing backup and recovery plans for "after-an-incident" cases. Continuing with the car analogy, it’s similar to ensuring your spare tyre is functional in the event of an emergency.
When IT engineers (or a software development vendor) develop software, the CTO/CIO should define where to deploy it (on separate servers in the Cloud or special containers, versus all sites on a single server) and how it should operate and be protected. Failing that, ask your internal (or your vendor’s) security consultants to design and implement a proper security strategy.
Seven Simple Tips to Ensure Web application Security
1. Educate your organization. Inform employees that Security experts need to ensure that an application is secure in code and design. Explain that DevOps experts are needed to implement monitoring and patch management as well as to secure support of your server and software. Security often goes hand in hand with DevOps, architecture assessment and business analysis.
2. Don’t put all of your eggs in one basket. Do not store all Web applications on a single server. It is architecturally incorrect and could negatively affect Web application performance. Using Microsoft Azure for your web apps has already proved to be an effective way to significantly decrease costs and create truly flexible and reliable solutions in the Cloud.
3. Patch your web apps and web server. Regardless of what framework is used, it’s important to remember that none are a safe haven for your Web application. All of them have some vulnerability that needs to be addressed.
4. Store your access keys and passwords securely. There have been far too many cases of hackers attacking developers and IT staff to steal SSH or cloud access keys for this to be taken lightly.
5. Engage a DevOps and/or security service provider. All Web applications need regular check-ups for the code and server security reviews & assessments. If your organization does not have internal experts, you can ask a security vendor to help establish a comprehensive security strategy and develop a plan for regular security check-ups.
6. If you’re outsourcing Web application development, make sure that security is part of the deal. Discuss security maintenance and check-up possibilities with your vendor. For long-term strategic partnerships, consider a shared responsibility model.
7. The greedy pay twice. It’s best not to skimp or cut corners on security, especially if you’re responsible for protecting the sensitive data of your Web application users. Security is a significant part of quality service and customer satisfaction. If you do not secure your Web application and data upfront, you can end up with additional unexpected costs.
This post was originally written for SoftServe.

The Black Death - A TED-Ed Lesson

Video: The Past, Present, and Future of the Bubonic Plague: https://www.youtube.com/watch?v=ySClB6-OH-Q
One of my favorite examples of using video to teach short lessons is the Black Death in 90 Seconds. It's a simple video that demonstrates that you don't have to create a fancy video to deliver a quality lesson. While the video is good, it's not a complete lesson on its own. To extend the lesson take a look at this new TED-Ed video, The Past, Present, and Future of the Bubonic Plague.

Popchrom Can Alleviate the Feeling of Dread When Your Inbox is Full

 
Popchrom is a Google Chrome extension that could change the way you feel about your email inbox: it allows you to create keyboard shortcuts for inserting large chunks of text into an email. So instead of retyping a message you can simply hit your keyboard shortcut and insert a big chunk of text.

After you install the Popchrom extension, enter chunks of frequently-used text into your Popchrom settings and assign a keyboard shortcut to each chunk. Then, when you need to write a response to an email, you can use those keyboard shortcuts to have your message created for you.

Applications for Education
If you find that you often type the same type of message to parents, students, or colleagues, Popchrom could save you tons of time.

Share Lesson Outlines in Google Calendar


I've done a lot of workshops about Google Apps over the years. In the section about Google Calendar I always share my favorite use of Google Calendar. That use is sharing lesson plan outlines in the events on a Google Calendar.

In my Google Calendar account I always create a calendar for each of the classes that I teach. Then in those calendars I create events for each class meeting. In the event details I include a short outline of that day's lesson plan and/or objectives. If I have hand-outs for that day, I include those as attachments. Click here for a sample of how the students will see those events. Click here to learn how to add attachments to a calendar event.

20 Tips to Increase Sales

Someone recently asked me, “Mike, you’ve been selling all your life. Tell me, what’s the secret to becoming a top performer?”
This got me thinking about whether I could distil the many elements of sales success down to 20 key principles. I thought about the habits of the super salespeople I’ve known over the years and came up with this list. Adopt one of these habits each day for the next 20 days, and you too could be a top performer.
1. Start and finish the day positively. Top performers are on their A-game from when they wake up until they leave the last prospect, or customer, of the day. Being positive makes your prospect positive — about buying.
2. Be an enthusiast. People are drawn to enthusiastic people. Genuine enthusiasm is catching; it’s like a tidal wave that will carry your prospect along to the sale.
3. Plan every call. Understand what you expect out of each interaction with a prospect. Consider whether the objective is to get the sale today, or whether this is a step along the way to a sale.
4. Use the power of knowledge. If enthusiasm is catching, so is someone who truly knows stuff. We are drawn to people who possess knowledge about something we are interested in, whether it’s a particular sport, art, food, or, more importantly, something we are considering purchasing. Become an expert in what you sell, the industry behind it, and the market you are selling to.
5. Demonstrate your expertise. Find opportunities to show people that you’re an expert. Offer free seminars to prospects or existing customers; produce a newsletter or write a book; record a podcast, or create a Facebook page. Whatever you do, become the guru in your field and people will find their way to you and buy what you sell.
6. Research interesting anecdotes, information, and jokes. I often get e-mail from friends, business acquaintances, and others that contain jokes and other useless detritus, but occasionally a snippet of fascinating information appears. When this happens, I am grateful to the person who sent it, because they are making me look good in the eyes of the people with whom I in turn share it. This is why jokes have been a staple of top performers since the dawn of selling.
7. Spend more time prospecting for companies and people that need, want, and can afford what you are selling. Don’t waste your valuable time and energy selling to people who are not highly likely to buy. Think long and hard about your target market. Top performers spend less time with prospects than the average salesperson because they have pre-qualified them.
8. Set yourself goals and targets. Super salespeople don’t wait for their sales managers to give them targets; they set their own. It’s a winning habit to set yourself targets based on the number of leads generated, calls per day, appointments made, presentations made, and sales achieved. If you can measure success by it — target it!
9. Identify your prospect’s behavioural style within 60 seconds. One of the keys to successful selling is to become a chameleon. We can all sell to people who have the same personality as ourselves; the trick to super sales is to relate well to people unlike you, or even with the opposite personality or social style.
10. Sell yourself first. Once you recognize the prospect’s behavioural style, it’s a whole lot easier to react to them in a way that will make them feel comfortable. The key to selling to anyone is the ability to make them like you. People don’t buy from people they don’t like — it’s that simple.
11. Ensure you are selling to the right person. This is a rookie mistake that happens all the time. Salespeople home in on people that look easy to sell to and spend inordinate amounts of time trying to convince them to purchase something. Before you waste any time on a potential prospect, spend a few minutes talking to them. Discover whether he or she is a bona fide prospect. The quicker you discover they aren’t, the quicker you can start selling to someone who is.
12. Track your sales progress. Every day, assess how well you are doing in moving toward your goals. Motivation comes from seeing that you are exceeding them, and when you’re not, you’ll know you need to pull your finger out, pronto.
13. Learn to love objections. Poor salespeople avoid objections as if they are bad. Top performers not only welcome them, they dig for them. As long as there is an unspoken objection, you won’t get the sale. Get into the habit of listing all the objections people might have for not buying what you sell and come up with answers. That way, when an objection arises you have the answer ready at hand.
14. Probe, clarify effectively, and listen. Constantly ask questions to make sure the prospect is hearing, and understanding, what you are telling them and clarify any misunderstandings.
15. Use interesting presentation materials. The more involved your customers are with your presentation, the more likely they are to buy. Use samples, demonstrations, colourful sales literature, or whatever is relevant to your product or service to generate interest and excitement.
16. Keep extensive notes. Top sales performers know their customers’ birthdays, children’s names, hobbies, likes and dislikes, and anything else that will help build a relationship with them.
17. Use trial closes. Get into the habit of asking prospects if they like aspects of what you are selling. This will provide an indication as to whether they are leaning toward purchasing, or highlight potential objections.
18. Ask for the sale every time. This is probably the oldest piece of advice out there, but at the end of the day, more sales are lost simply because no one asked for the order. Come up with several phrases that you feel comfortable with, such as, “So, delivery next week is OK for you?” or, “OK, so let’s write this up.” In my experience, this type of close is the most effective and easiest to employ.
19. Evaluate every call. Remember sales targets and goals, and tracking your progress? Well, it’s not just about the numbers: after every sales interaction, carry out a postmortem and look at what went well and what could be improved.
20. Follow up every call. Following up after a call is not just polite, it’s good business practice. It’s far less expensive and takes a whole lot less time and effort to sell to an existing customer than to try to find a new one. Start building relationships by following up each sale and then regularly thereafter.
Written by Mike Wicks for Douglas Magazine.