These same customers also tell me that as soon as their partners turn into “pick my stack, my full stack – like it or not,” they move from being a partner to being a vendor. When they push that envelope too far, such as cranking licensing/pricing in a negative direction (Oracle is constantly the reference example) – they move from passive dislike into frustration, anger and then full-on rage.

This is a delicate balancing act. Everyone out there is trying to make it easier for companies to source more from them, but as soon as the choice is not driven by the customer… well, those customers end up feeling like I feel about my cable company: “Their internet rocks. Their set-top box and cable programming suck. Their mobile phone sucks. I DON’T WANT BUNDLING.”

I also hear another thing more and more often – customers that dig EMC also dig VMware and Pivotal. They want us to work more closely together. There are customers every day that are going all in on the Federation and see benefit to their business. And, if you listened to the recent analyst call (transcript here), you heard Joe saying how those customers spend on average 2x more with EMC and VMware and move faster with Pivotal.

I want to humanize this for a second. My daughter was in the ER three weekends back (fear not – nothing serious – just a wakeboarding head injury with no permanent damage other than a wound to her pride). During those long hours, EMC, VMware, Pivotal, RSA and the Federation were the last thing on my mind. Afterwards I realized the hospital had a new PACS system – and that in all likelihood we power that system – so in a way we as the Federation had a part to play (a VERY small part relative to the doctors!).

Every day families work through healthcare issues that make my weekend detour look easy. These things matter – and are about making people’s lives better – not just driving technology forward.

Want another example of making lives better?
EMC’s sustainability report is a fascinating read.

What’s interesting about these “making the world better” and “getting to an outcome” stories? They don’t start with low-level technology elements. This is difficult to internalize as a technologist who likes “being in the weeds.” Technology matters – in fact it’s central – but the real magic is in “putting it all together.”

This is the other macro thing that is driving “Federation Better Together” for me, and it may suggest that the balance point in the balancing act is moving.

Every day I talk to customers, and more are saying: I’m done wasting my time on lower-level integration – I want a faster outcome. I know that part of moving faster is focusing less on “lock-in” that is no longer material, and more on picking partners that I trust and redirecting efforts higher up the stack – period.

A good friend and former colleague (good luck in the new gig, Tyler!) did an absolutely BRILLIANT post on “lock-in” here that I would highly recommend. His definition is perhaps the best I’ve ever seen: How much friction (and what would it cost to overcome) am I introducing to our environment, and is the value we’re gaining worth it? Through that lens, bad “lock-in” is a situation where the friction to move outweighs the benefits of moving, and good “lock-in” is a situation where the friction to move is dwarfed by the benefits of moving.

What people are coming to realize is that lock-in at lower levels of the stack is now at a point where the friction to move is low. This is due to open APIs and open source, coupled with much better abstraction through virtualization and containerization. Even moving from one IaaS stack to another is possible (though that has more friction).
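The “lock-in as a friction:benefit ratio” lens described above can be sketched as a tiny decision helper. This is a purely illustrative sketch – the function names and all the numbers are my own hypothetical inputs, not anything from the post:

```python
# Illustrative sketch of the lock-in lens: lock-in is a ratio of
# "friction to move" vs. "benefit of moving", not a monster to be feared.
# All weights and example numbers below are hypothetical.

def lock_in_ratio(friction_to_move: float, benefit_of_moving: float) -> float:
    """Return the friction:benefit ratio. A ratio > 1.0 means 'bad' lock-in
    (the friction to move outweighs the benefit); < 1.0 means moving wins."""
    if benefit_of_moving <= 0:
        return float("inf")  # no benefit to moving: stay put
    return friction_to_move / benefit_of_moving

def should_move(friction_to_move: float, benefit_of_moving: float) -> bool:
    """'Good' lock-in is when the friction is dwarfed by the benefit of moving."""
    return lock_in_ratio(friction_to_move, benefit_of_moving) < 1.0

# Hypothetical examples: swapping a low-level stack component (low friction,
# thanks to open APIs and containerization) vs. relocating data with heavy
# data gravity (high friction).
print(should_move(friction_to_move=2.0, benefit_of_moving=10.0))  # True
print(should_move(friction_to_move=9.0, benefit_of_moving=3.0))   # False
```

The point of the sketch is only that the question is a ratio, not a binary fear: the same move can be rational or irrational depending on which side of 1.0 your own friction estimate lands.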
The new “high friction” comes from data gravity (this remains really hard to move – and steers compute to tend to want to co-locate), governance/compliance/regulation, and hard-coding your app to someone’s API without some open protection (the most dangerous of these elements). The ultimate proof point (in the lock-in vs. agility debate) is the rapid growth of the public cloud stacks – where you have ZERO control over the stack (or the services).

This is all to say the following:

When customers want faster outcomes (which is happening more and more)…

AND

They realize that lock-in is a ratio of “friction to move : benefit of moving,” not a monster to be feared…

THEN

They ask the Federation to come with answers to the questions of “what if we were all in with you?” and “does the ratio of friction:benefit work in my favor?”

This is the story of so many customers I see around the globe. There is so much to do and we can clearly get better!

This question – “what can we do to make the Federation work better, while striving to not remove the strength that comes from preserving the cultures, autonomy, and freedom of motion?” – has been keeping me very busy over the years, and no time more than over the last couple of months.

I believe there are six buckets to what we can do to make the Federation work even better (IMHO):

Services – a common team approach to delivering on integrated Federation projects; operating based on the best skills to deliver, regardless of Federation team member (we’ve largely cracked this one).

Sales – to be applied when a customer says “I want a Federation team where the buck stops – and leverages the whole Federation portfolio for what they think is right.” This means being able to do Federation ELAs, and other operational considerations.
We’ve started to apply Federation account coverage for customers who really want to go “all in” with us – early days to be sure, but exciting!

Software Defined Storage – together, the SDS portfolio of VMware and EMC is second to none – from extending traditional external storage (VVols/ViPR) to, in other cases, replacing external transactional storage with SDS (VSAN and ScaleIO). Together we cover vSphere-only use cases, and also any heterogeneous use case. Together we offer SDS data planes beyond transactional storage with Elastic Cloud Storage (ECS). Together as VMware and EMC there is no peer in the ecosystem when it comes to a complete and open SDS portfolio.

Converged Infrastructure – a big part of moving faster is abandoning mix-and-match at the lowest levels of the stack. This is causing vendor ecosystems to collapse, and things that were obvious in the past simply don’t work anymore. Maybe CI isn’t an ecosystem play anymore? SDS with validated hardware still seems to work as an ecosystem thing (think of VSAN-ready nodes, or the new ScaleIO nodes). vSphere certainly is a massive ecosystem play (massive ecosystem). But when it comes to real CI (not assembled reference architectures), customers are drawing clear lines about stacks that they like and don’t like – technologically, as well as strategically and through the support lens. New battle lines are being drawn. This is a function of something more fundamental: the new commodity is the full IaaS stack. It’s not to say that all IaaS stacks are the same, rather that IaaS is now the level of infrastructure comparison.

Cloud – the consumption model of technology that cloud creates needs to operate at the Federation level. Not that we don’t continue to partner openly with SPs and Telcos, but there needs to be a more Federation-level model for it.

A clear strategic position on the infrastructure design point built for Cloud Native Application workloads.
This area (Cloud Native Applications and the IaaS that underpins them) is one of the biggest hairballs, because right now it seems that, depending on the customer, the “pragmatist” and “purist” views each have a place – the landscape is moving fast! More work is needed here – but you can see we’re all over it.

In the end (and most importantly!), I want to say a huge “thank you” to our Federation customers. Know that we’re working furiously on all six of these, while maintaining our promise to you of always offering choice and embracing an open ecosystem.

There has been so much speculation about EMC/VMware lately, and I continue to be surprised by how much the speculation feeds itself. One reporter speculates, and then another reports the first as a source. It’s all like a snake eating its own tail.

My perspective is based on the customers I talk to and mirrors the one Joe Tucci staked out in response to the analysts on EMC’s most recent earnings call: the customers I talk to want the Federation of EMC/VMware/Pivotal/RSA to be MORE integrated, while fiercely resisting models where things are too coupled.

It’s funny because those sound like polar opposites, but I also get where they are coming from. Customers want a loose coupling that gives freedom of choice to pick a part or pick the whole.

People jokingly compare our Federation to another Federation. The analogy to the fictional Star Trek Federation is apt beyond the common name. That other Federation is a collection of different planets and cultures. They have different strengths and weaknesses, but come together on common goals.
That’s pretty familiar territory for the employees of EMC, VMware, Pivotal and RSA.

What do I hear from customers about what they want – specifically?

They want to be able to use everything in the VMware portfolio without being obligated to use EMC or Pivotal.

They want to be able to use EMC without necessarily using VMware.

They want to be able to use Pivotal Cloud Foundry and the Pivotal Big Data Suite anywhere, including vCloud Air and their VMware-powered on-premises clouds (the most common deployment model for Pivotal Cloud Foundry), but also on AWS, Azure, and others.

Many Isilon customers are very happy to see EMC partner with Cloudera. Heck, EMC even resells Cloudera for people who want to bundle these together (El Reg covers that here). Want a pure open source Apache Hadoop distribution instead, to align yourself with the Open Data Platform (ODP)? EMC partners with Hortonworks (see that here). A pretty clear example of choice.

What about the new world of Cloud Native Applications and how to best support them at the infrastructure level?

Some customers passionately believe in a pragmatic view of Cloud Native apps. This view is that perhaps it’s best to build new apps on the same unified cloud stack which runs kernel-mode VMs and containers simultaneously, can present via the vRealize/vCloud APIs and equally via the OpenStack APIs, and offers rich virtual infrastructure services when needed – and not when not. This is the Federation Enterprise Hybrid Cloud, which industrializes the VMware stack with rich workflows and integration with a broad ecosystem. The people and process of this “unified cloud” approach often struggle to reconcile SLAs and ITIL processes geared to the most legacy app with the agility that the new cloud native apps desire (while demanding none of the infrastructure resilience).
This is not a technical issue, but it is a very real one nonetheless.

Other customers are equally passionate in a diametrically opposed direction – that while you CAN run Cloud Native Applications on infrastructure and operational models designed for classic infrastructure-dependent applications, you SHOULDN’T. Instead, you bias for elasticity, programmability, cloud-level scale and economic models. Beyond the technology, this operational model fits the DevOps cultural model. This cloud usually runs adjacent to a “unified cloud” that powers the traditional applications. Does the Federation have an answer? You bet! This is the Pivotal Cloud Foundry + VMware Photon Platform + EMC VxRack solution which was discussed at VMworld 2015.

Other customers reject VMware’s role in the world of Cloud Native Apps with passion. I think that’s a little foolhardy – because outside some of the SaaS startups I meet with, few customers would see a ton of benefit from building their own Cloud Native unstructured PaaS (DIY PaaS that starts with building on top of Mesos + Marathon/Kubernetes) on homebrew IaaS models. Outside SaaS startups (which rock with this approach), many enterprise customers go down that path and come back 18 months later saying “help!” VMware can make the Photon Platform the “Enterprise IaaS for pure Cloud Native Apps.” That said – those customers commonly believe in a purist open source model, and bias towards the efforts that Pivotal and EMC are pursuing with Project Caspian as the “industrialization” of a purist open source stack. This is another manifestation of choice.
Technology is no longer just a business tool – it is also helping to solve social issues. Take the question of personal and public security, which is a growing concern in today’s world. For example, as a parent, have you ever had the awful experience of your two-year-old wandering off in a busy shopping mall? One minute, they are beside you. You turn your head for literally a moment and when you look back, your son or daughter seems to have vanished into thin air. The chances are that the child has just wandered off innocently and there is no abduction involved, but the panic of that moment stops you straight in your tracks and you are sick with worry until your child is safely located.

At the Airport

Picture a busy airport, milling with people. A bag – abandoned in the check-in area – has been designated as a potential security threat. It may be an innocent mistake on the part of a distracted passenger, or it could represent a terrorist attack and present a risk to everyone at the airport. What does security do? How do they quickly identify the owner?

Of course, nothing can ever replace the importance of traditional policing, smart intelligence, surveillance and the presence of police on the ground, but the Internet of Things, coupled with secure CCTV technology, is certainly putting real-time data at the fingertips of both police and security personnel.

In the Shopping Mall

Take, for example, the case of the missing toddler. Imagine the security guard using his/her smartphone – loaded with special software – to photograph the parent for immediate upload into the shopping centre’s facial recognition system. Armed with this image, the system instantly searches the footage from that day and identifies when the parent first arrived at the shopping centre with the child.

Having extracted this footage, Security can then enroll the child’s face into the online facial recognition system.
This automatically searches for the missing child across all the CCTV cameras in the network, tracking the movement of the child in real time – where they have been and where they are right now. Based on the GPS coordinates, the guard closest to the child is automatically alerted and the family is quickly reunited. This whole process – from start to finish – takes minutes, helping to quickly resolve a very traumatic experience for both parent and child.

Real-Time Tracking

Let’s switch to the scenario in the airport. The IoT-based CCTV system quickly locates the abandoned bag – even if partially obscured. It then jumps back to the relevant footage and enrols the face of the person who left the bag there. This image is transmitted to all cameras in the network and the person’s location is automatically tracked in real time. An urgent alert – complete with a photograph of the person and details of the incident – is automatically sent to the nearest security guard for action. The likelihood is that the episode was simply an innocent mistake, but IoT-enabled personal devices with face recognition technologies, connected to a database of criminals, can proactively warn police when convicted offenders are in the vicinity.

Other Developments

In other security developments, the New York City Police Department has tested acoustic sensors, which can detect illegal gunshots and provide real-time alerts to police in busy precincts. Many police officers now wear body cams on the beat, with studies indicating that they improve self-awareness and help promote the right behaviour from both the police and those they interact with.

The bottom line is that police agencies across the world are moving toward more data-driven approaches to solving crimes. Machine learning is particularly good at identifying patterns and can be useful when trying to discern the modus operandi of an offender, particularly in the case of serial crime.

Supporting Urban Planning

Let’s switch to a more benign setting.
Maybe you work in the local planning authority. How do you make public spaces in the city work better for citizens? What is the air pollution level like at any given moment in time? Which streets in the city centre attract the most footfall? What is the percentage of car users versus pedestrians and cyclists?

Data Is the Answer

Thanks to the use of sensors, IoT CCTV and analytics, planners can now better understand footfall patterns – how many people are going where, how and when. It’s important to say that in this instance, people are not individually identified – rather, the planners are looking at aggregated data to help determine infrastructural requirements, like the number of required cycle ways, car lanes, footpaths, parks and bins.

There are other potential benefits. For example, business people looking to open a new shop could potentially be given accurate footfall figures for their proposed location to help them assess the potential of their new venture.

Smart parking can also use sensors and devices to help drivers quickly locate parking spaces, reducing congestion and fuel emissions. There are also obvious public security benefits. Apart from detecting and preventing vandalism and crime in real time, in the event of an accident – or, say, an elderly person falling – the emergency services can be automatically summoned to the scene.

Smart Partnerships

So, what role does Dell EMC OEM play in all of this? The answer is simple. We collaborate with specialist video surveillance and security partners, like iOmniscient, Milestone, V5 Systems and Pelco, to power their solutions. Our partners provide the IP while we provide the customised hardware platform and support services.

Of course, it goes without saying that criminals and hackers will try to exploit any vulnerability they can find in new security systems. All these interconnected networks and devices need the right levels of security, built in from the start, to protect both the cities and their citizens.
That is where we can also add value. We have a dedicated focus on surveillance, with experts available in our IoT lab to collaborate with specialist video surveillance and security partners.

As a society, I believe that we need to continue to respect the importance of individual privacy while carefully balancing this against the need to protect the common good.

What are your views on technology being used to improve security and urban planning? I would love to hear your comments and questions.
In deep learning applications, FPGA accelerators offer unique advantages for certain use cases.

In artificial intelligence applications, including machine learning and deep learning, speed is everything. Whether you’re talking about autonomous driving, real-time stock trading or online searches, faster results equate to better results.

This need for speed has led to a growing debate on the best accelerators for use in AI applications. In many cases, this debate comes down to a question of server FPGAs vs. GPUs – or field-programmable gate arrays vs. graphics processing units.

To see signs of this lively debate, you need look no further than the headlines in the tech industry. A few examples that pop up in searches:

“Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Learning?”

“FPGA vs GPU for Machine Learning Applications: Which One Is Better?”

“FPGAs Challenge GPUs as a Platform for Deep Learning”

So what is this lively debate all about? Let’s start at the beginning. Physically, FPGAs and GPUs often plug into a server PCIe slot. Some, like the NVIDIA® Volta Tesla V100 SXM2, are mounted onto the server motherboard. Note that GPUs and FPGAs do not function on their own without a server, and neither FPGAs nor GPUs replace a server’s CPU(s). They are accelerators, adding a boost to the CPU server engine. At the same time, CPUs continue to get more powerful and capable, with integrated graphics processing. So start the engines – the race is on between servers that have been chipped, turbocharged and supercharged.

FPGAs can be programmed after manufacturing, even after the hardware is already in the field – which is where the “field-programmable” comes from in the field-programmable gate array (FPGA) name. FPGAs are often deployed alongside general-purpose CPUs to accelerate throughput for targeted functions in compute- and data-intensive workloads.
They allow developers to offload repetitive processing functions in workloads to rev up application performance.

GPUs are designed for the types of computations used to render lightning-fast graphics – which is where the “graphics” comes from in the graphics processing unit (GPU) name. The Mythbusters demo of GPU versus CPU is still one of my favorites, and it’s fun that the drive for video game screen-to-controller responsiveness impacted the entire IT industry, as accelerators have been adopted for a wide range of other applications, from AutoCAD and virtual reality to cryptocurrency mining and scientific visualization.

FPGA and GPU makers continuously compare against CPUs, sometimes making it sound like they can take the place of CPUs. The turbo kit still cannot replace the engine of the car – at least not yet. However, they want to make the case that the boost makes all the difference. They want to prove that the acceleration is really cool. And it is, depending on how fast you want or need your applications to go. And just like with cars, it comes at a price. After the acquisition cost, the price includes the amount of heat generated (accelerators run hotter), the fuel required (they need more power), and the fact that sometimes applications aren’t programmed to take full advantage of the available acceleration (see the GPU applications catalog).

So which is better for AI workloads like deep learning inferencing? The answer is: it depends on the use case and the benefits you are targeting. The ample commentary on the topic finds cases where FPGAs have a clear edge and cases where GPUs are the best route forward.

Dell EMC distinguished engineer Bhavesh Patel addresses some of these questions in a tech note exploring reasons to use FPGAs alongside CPUs in the inferencing systems used in deep learning applications.
A bit of background: When a deep learning neural network has been trained to know what to look for in datasets, the inferencing system can make predictions based on new data. Inferencing is all around us in the online world. For example, inferencing is used in recommendation engines – you choose one product and the system suggests others that you’re likely to be interested in.

In his tech note, Bhavesh explains that FPGAs offer some distinct advantages when it comes to inferencing systems. These advantages include flexibility, latency and power efficiency. Let’s look at some of the points Bhavesh makes:

Flexibility for Fine-Tuning

FPGAs provide flexibility for AI system architects looking for competitive deep learning accelerators that also support customization. The ability to tune the underlying hardware architecture and use software-defined processing allows FPGA-based platforms to deploy state-of-the-art deep learning innovations as they emerge.

Low Latency for Mission-Critical Applications

FPGAs offer unique advantages for mission-critical applications that require very low latency, such as autonomous vehicles and manufacturing operations. The data in these applications may arrive in streaming form, requiring pipeline-oriented processing. FPGAs are excellent for these kinds of use cases, given their support for fine-grained, bit-level operations in comparison to GPUs and CPUs.

Power Savings

Power efficiency can be another key advantage of FPGAs in inferencing systems. Bhavesh notes that since the logic in FPGAs has been tailored for specific applications and workloads, it is extremely efficient at executing the application. This can lead to lower power usage and increased performance per watt.
By comparison, CPUs may need to execute thousands of instructions to perform the same function that an FPGA may be able to implement in just a few cycles.

All of this, of course, is part of a much larger discussion on the relative merits of FPGAs and GPUs in deep learning applications – just like with turbo kits vs. superchargers. For now, let’s keep this point in mind: when you hear someone say that deep learning applications require accelerators, it’s important to take a closer look at the use case(s). I like to think about it as if I’m chipping, turbo- or supercharging my truck. Is it worth it for a 10-minute commute without a good stretch of highway? Would I have to use premium fuel or get a hood scoop? It might be worth it to win the competitive race, or for that muscle car sound.

Ready to learn more? Check out Bhavesh Patel’s high-level Tech Talk on Inferencing Using FPGAs and his deeper-dive tech note on the same topic, Where the FPGA Hits the Server Road for Inference Acceleration.
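The “it depends on the use case” advice above is easiest to act on by measuring. Here is a minimal, purely illustrative Python harness (the function names and the dummy workload are my own assumptions, not anything from the tech note) for comparing per-request latency against batch throughput for any inference callable – the two numbers that tend to separate the latency-sensitive streaming cases from the throughput-oriented batch cases:

```python
import time
from statistics import median

def measure_latency(infer, requests, runs=50):
    """Time single-request calls; return (p50, worst) latency in seconds.
    Low, predictable per-request latency is what streaming and
    mission-critical workloads care about."""
    samples = []
    for _ in range(runs):
        for req in requests:
            start = time.perf_counter()
            infer([req])  # one request at a time
            samples.append(time.perf_counter() - start)
    return median(samples), max(samples)

def measure_throughput(infer, requests, runs=50):
    """Time whole-batch calls; return requests per second.
    High batch throughput is what offline scoring cares about."""
    start = time.perf_counter()
    for _ in range(runs):
        infer(requests)  # all requests in one batch
    elapsed = time.perf_counter() - start
    return (len(requests) * runs) / elapsed

# Dummy "model" standing in for any inference backend: sums each input vector.
def dummy_infer(batch):
    return [sum(vec) for vec in batch]

requests = [[float(i), 2.0, 3.0] for i in range(32)]
p50, worst = measure_latency(dummy_infer, requests)
rps = measure_throughput(dummy_infer, requests)
print(f"p50 latency: {p50*1e6:.1f} us, worst: {worst*1e6:.1f} us, "
      f"throughput: {rps:,.0f} req/s")
```

The same harness would wrap a call to a real model server; comparing the whole latency distribution (not just the average) against the throughput numbers is usually what reveals whether your workload looks like the FPGA-friendly streaming case or the GPU-friendly batch case.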
PARIS (AP) — Activists spread four dead dolphins on the cobblestones outside France’s parliament to urge safer fishing industry practices to protect dolphins from fatal encounters with fishing nets. They unfurled a banner reading “Thousands of dolphins like these are massacred each year in France so that you can eat fish.” Police watched closely as the activists from environmental group Sea Shepherd protested alongside dolphins found washed up on Atlantic beaches in the Vendee region Monday. Activists have long urged the French government to limit the amount of time fishing vessels can fish in certain zones, and to require cameras on fishing vessels to make sure they are employing humane methods.