Monday, August 12, 2019

Dell EMC Goes Head-to-Head with Competitors at ISC 2019

The annual International Supercomputing Conference (ISC) is a time when the world’s biggest and best HPC system vendors come together to share their insights in presentations, showcase their latest innovations, and compete for the attention of the profession and its customers.

That was certainly the case this year at the ISC 2019 conference in Frankfurt. The roughly 3,500 attendees who came to the five-day gathering got to see the best in the HPC world and to hear from some of the brightest minds in the industry. And that’s how it was when distinguished technologists went head-to-head in the annual ISC Vendor Showdown session.

Unique to ISC High Performance, the Vendor Showdown comprises an afternoon of presentations by industry powerhouses in computing, networking, data and storage. Speakers are allowed only a few minutes to present their organizations’ newest strategies, products or research developments. A panel of expert moderators asks each speaker a few in-depth, follow-up questions, and the speakers have just five minutes to give their responses. Once the presentations are complete, the audience votes to pick a winner.



In this fast-paced showdown, Jay Boisseau, an AI and HPC technology strategist, gave the Dell EMC presentation. Here are a few highlights from his presentation:

  • Dell EMC has 20 years of leadership in the HPC cluster space. The company built its first HPC cluster in 1999, and today builds TOP500 clusters for the likes of the Texas Advanced Computing Center (TACC), the Ohio Supercomputer Center (OSC) and the University of Michigan - to name a few of the organizations that have recently rolled out new Dell EMC supercomputers.
  • The Dell EMC solutions portfolio covers the wide world of HPC needs, from simulation and data analytics to artificial intelligence, machine learning and deep learning. It’s a portfolio based on open standards, optimal configurations and customer choice, and it includes technologies from such HPC leaders as Intel, NVIDIA, AMD, Mellanox, Bright Computing and OpenHPC.
  • New additions to the Dell EMC server portfolio include the Dell EMC DSS 8440 server, a computing powerhouse built for the challenges of machine learning. It packs up to 10 full-size accelerator cards in 4U of space, up to 205W CPUs with accelerators in 35°C environments, up to 10 drives of local storage and extensive I/O options. Better still, it’s all based on a design that puts accelerators, storage and interconnect on the same switch for maximum performance.


Let’s add a little drama here. The competition in the Vendor Showdown session was stiff, pitting Dell EMC against such HPC powerhouses as Amazon Web Services, Cray and Oracle Cloud Infrastructure. And now, the envelope, please…

After all of the speakers had their 15 minutes of fame, the audience voted to give the top spot to the Jay Boisseau presentation made on behalf of Dell EMC. So hats off to Jay for a great presentation that knocked the socks off the ISC audience.

Saturday, August 10, 2019

Not Just Another G: The Next Generation

This is the second installment in our series Not Just Another G, which provides insight into 5G and what it means to the business. Missed the first post? Catch up here.

The next-generation 5G architecture is built around the realization that different services are consumed differently, by different types of users. Thus, next-generation mobile access technology must have:

  1. A way to define those differences,
  2. A way to measure and place constraints in order to meet those differences, and
  3. A way to architect access techniques that meet the goals of the different services that ride on top of the technology.


It is to this end that the 5G technology has built-in support for what’s called “network slicing” - a fancy phrase to say that the network is cut up, with each slice configured to meet the requirements of a single type of service.



In the 5G architecture, for instance, there’s a slice designed to deliver common mobile consumer data. This slice delivers the high-throughput data consumers want access to, which may be things like pictures, videos, live video interactions, remote mailbox access or remote shared data vault access.

Another slice is designed for what’s called “latency-critical” applications. Imagine a connected, self-driving, auto-diagnosing car of the future. The car, connected to 5G, will be the “new cell phone”. It will automatically make things happen so the driver can choose not to pay attention and enjoy life, or get work done while commuting. This requires a fast, high-speed, reliable, always-available and latency-critical network. The 5G latency-aware slice enables a network design that can make these guarantees. Incidentally, the car is just one of many such latency-critical applications.

Another slice of the network is designed to meet both the latency and the capacity requirements of a service. Consider the example of TeleHealth, a use case in which a medical provider is physically remote from the consumer. Many healthcare situations demand TeleHealth, which has seen only limited realization because a truly mobile, low-latency and capacity-aware network architecture has remained a challenge. All TeleHealth use cases require the following (a rough sketch of these slice profiles appears after the list):

  • Interaction with no frame/audio drops,
  • Atomic guarantees of delivery - if a command was sent, the network must ensure the delivery of that command and the response back, and
  • Ubiquity - be it a stranded climber on a remote mountain or an inner-city youth who needs the help of an expert at the Mayo Clinic, the network should always be there to support the service.
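To make the slicing idea a little more concrete, here is a minimal sketch of how the three slices described above might be captured as profiles. This is purely illustrative: the field names and the latency and throughput targets are our own assumptions, not a 5G standard or a Dell EMC API.

```python
# Illustrative slice profiles for the three services described above.
# Field names and target values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class SliceProfile:
    name: str
    max_latency_ms: float      # one-way latency target
    min_throughput_mbps: int   # per-user throughput target
    guaranteed_delivery: bool  # atomic delivery guarantees required?

slices = [
    # High-throughput consumer data: photos, video, mailbox access.
    SliceProfile("mobile-broadband", max_latency_ms=50.0,
                 min_throughput_mbps=100, guaranteed_delivery=False),
    # Latency-critical: connected, self-driving vehicles.
    SliceProfile("latency-critical", max_latency_ms=5.0,
                 min_throughput_mbps=10, guaranteed_delivery=True),
    # TeleHealth: both latency- and capacity-sensitive, with delivery guarantees.
    SliceProfile("telehealth", max_latency_ms=20.0,
                 min_throughput_mbps=50, guaranteed_delivery=True),
]

for s in slices:
    print(f"{s.name}: <= {s.max_latency_ms} ms, "
          f">= {s.min_throughput_mbps} Mbps, "
          f"guaranteed delivery: {s.guaranteed_delivery}")
```

Each slice, in other words, is just the same physical network configured against a different set of service-level targets.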


This innovative new world requires a lot of infrastructure. It requires an increase in cell stations, to which numerous end users will be connected in order to consume services. It requires compute, storage and networking capabilities distributed across the edge of the network, enabling a service delivery platform running both network services and third-party application workloads. This edge platform, along with differentiated classes of service, provides new ways for telcos to monetize the infrastructure and charge consumers.

At Dell Technologies, we’re focused on creating the best possible infrastructure elements to help the development of next-generation mobile access networks. Dell EMC servers are best-in-class and hold the largest share of the market. Dell EMC storage is second to none, offering all the types and variations needed to match the goals of any point of presence in a 5G network. Dell EMC Networking gear brings it all together, in a self-aware, software-defined, declarative manner so the network can adapt rapidly to meet the requirements of all of the 5G slices.

Thursday, August 8, 2019

Introducing 3rd Party Software Support

Let’s face it, supporting your critical data center infrastructure is hard to do. Knowing which vendor to contact with a problem, remembering all of the contact information, websites and more can be confusing and flat-out frustrating.

It’s true, most vendors say they’ll provide “multivendor” support - but the fact is that ultimately you’re often the one responsible for engaging with the third-party vendor and closing the case.

At Dell EMC, we get this, which is why we’ve created a new feature called 3rd Party Software Support, which is included with your ProSupport Plus for Enterprise support agreement.

With 3rd Party Software Support, we’ll support any eligible software installed on your Dell EMC system, and the best part is, we’ll support it whether you purchased the software from us or not. Not only will we identify the problem, we’ll own the issue through resolution. This includes software titles from Microsoft®, Red Hat® and VMware®.

The bottom line is this: whether you buy your software from Dell EMC or you already own the software and want to use it on our technology, with ProSupport Plus for Enterprise, if you have a support issue, just give us a call. We streamline support through our technology experts, who own your case from first call to resolution.

Tuesday, August 6, 2019

New Server Hits the Machine-Learning Track

The new Dell EMC DSS 8440 server accelerates machine learning and other compute-intensive workloads with the power of up to 10 GPUs and high-speed I/O with local storage.

As high-performance computing, data analytics and artificial intelligence converge, the trend toward GPU-accelerated computing is shifting into high gear. In a sign of this momentum, the TOP500 organization notes that new GPU-accelerated supercomputers are changing the balance of power on the TOP500 list. This observation came in 2018, when a periodic update to the list found that most of the new flops came from GPUs instead of CPUs.[1]

This shift to GPU-accelerated computing is having a major impact on the HPC market. IDC projects that the accelerated server infrastructure market will grow to more than $25 billion by 2022, with the accelerator portion accounting for more than half of that volume.[2]

“With AI establishing itself in the datacenter and the cloud at a phenomenal rate, and with traditional high-performance computing increasingly looking for performance beyond the CPU, the search for acceleration is heating up, as is the competition among vendors that offer acceleration products,” an IDC research manager notes.

Driving accelerated computing forward


At Dell EMC, the Extreme Scale Infrastructure (ESI) group is helping organizations catch the accelerated-computing wave with a new accelerator-optimized server designed specifically for machine learning applications and other demanding workloads that require the highest levels of computing performance.



This new 2-socket, 4U server, the Dell EMC DSS 8440, has 10 full-height PCIe slots in front, plus 6 half-height PCIe slots in the rear, to provide the right balance of accelerators, launching with 4, 8 or 10 NVIDIA® Tesla® V100 GPUs. It also incorporates extensive I/O options, with up to 10 drives of local storage (NVMe and SAS/SATA), to provide increased performance for compute-intensive workloads, such as modeling, simulation and predictive analysis in scientific and engineering environments.

The new design puts accelerators, storage and interconnect on the same switch for maximum performance, while providing the capacity and thermals to support future technologies. Offering efficient performance for common frameworks, the DSS 8440 server is ideal for machine learning training applications, reducing the time it takes to train machine learning models and the time to insights. It enables organizations to easily scale acceleration and resources at the pace their business demands.
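To picture the kind of job this server is aimed at, here is a minimal sketch of single-node, multi-GPU data-parallel training in PyTorch. This is our own illustration under stated assumptions (a toy model and random data), not Dell EMC code; on a fully loaded DSS 8440, torch.cuda.device_count() would report up to 10 GPUs.

```python
# Minimal single-node, multi-GPU training sketch (illustrative only).
# Assumes PyTorch; falls back to CPU if no CUDA devices are present.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpus = torch.cuda.device_count()  # up to 10 on a fully loaded DSS 8440

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
if n_gpus > 1:
    # DataParallel splits each batch across all visible GPUs.
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Toy batch; in practice this would come from a DataLoader reading
    # training data off the server's local NVMe drives.
    x = torch.randn(256, 1024, device=device)
    y = torch.randint(0, 10, (256,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The point of a 10-GPU chassis is that the same loop scales its effective batch throughput with the number of visible GPUs, with no change to the training code.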

The rise of a new machine


The DSS 8440 was created in response to customer demand for even higher levels of acceleration than were previously offered by Dell EMC, according to Paul Steeves, a product manager for the new server.

“As our customers push further ahead with machine learning solutions, it became apparent that there was a need for increased levels of accelerated raw horsepower,” Steeves says. “While accelerated servers are available from our competitors, many of our customers want open solutions, with choice not just now, but also over time as technology advances.”

In addition, Dell EMC designed the DSS 8440 server specifically with machine learning in mind, Steeves notes. For example, the system includes 10 high-performance local drives and extensive I/O options to create a more targeted solution for today’s growing number of machine learning workloads.

Key takeaways


  • The DSS 8440 server offers very high levels of acceleration, with up to 10 NVIDIA V100 GPUs in an open PCIe fabric architecture that allows other open-standard components to be easily added in future versions.
  • The DSS 8440 server delivers the raw compute performance that HPC-driven organizations need today, along with the flexibility to adopt new machine learning technologies as they emerge.


Putting the system to work


The DSS 8440 server is built for the challenges of the complex workloads involved in training machine learning models, including those for image recognition, facial recognition and natural language translation.

“It is especially effective for the training of image recognition and object-recognition models, where it performs within a few percentage points of the leading figures - but with a power efficiency premium,” Steeves notes.

Another strength of the DSS 8440 server is its ability to enable significant multi-tenant capabilities.

“With 10 full-height PCIe slots available, customers can assign machine learning or other compute-intensive tasks to many different instances within a single box,” Steeves says. “This allows them to readily distribute compute among departments or projects.”
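One common way to realize this kind of multi-tenancy is to give each job a disjoint set of GPUs via the standard CUDA_VISIBLE_DEVICES environment variable, which restricts which GPUs a process can see. A minimal sketch, with a hypothetical train.py and an assumed GPU split:

```python
# Illustrative sketch: launch two tenants' training jobs on disjoint GPU sets.
# CUDA_VISIBLE_DEVICES is a standard CUDA mechanism; "train.py" and the
# split below are hypothetical examples.
import os
import subprocess

jobs = {
    "research-team": "0,1,2,3",      # GPUs 0-3
    "product-team": "4,5,6,7,8,9",   # GPUs 4-9
}

procs = []
for tenant, gpus in jobs.items():
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = gpus  # each job sees only its own GPUs
    procs.append(subprocess.Popen(
        ["python", "train.py", "--job", tenant], env=env))

for p in procs:
    p.wait()
```

Each process then behaves as if its subset were the whole machine, which is what makes it easy to carve one box up among departments or projects.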

The bottom line


As organizations dive deeper into machine learning, deep learning applications and other data- and compute-intensive workloads, they need the power of accelerators under the server hood. The new Dell EMC DSS 8440 server meets this need with a versatile balance of accelerators, high-speed I/O and local storage.

Sunday, August 4, 2019

Next Frontier of Opportunity for OEMs: Data Protection

Imagine if your email provider had a major outage and your last 48 hours of emails were lost. Or your recent radiology scans disappeared because the imaging repository crashed at the hospital.

Startling statistics


Alarming instances of data loss happen more often than we think, as the annual Global Data Protection Index (GDPI), a survey commissioned by Dell Technologies, reveals. The study, which surveyed 2,200 decision makers from organizations with 250+ employees across 18 countries and 11 industries, reports some startling statistics:

  • On average, 2.13 terabytes of lost data cost organizations $995,613 over the last 12 months.
  • Only 33% reported high confidence that their organization could fully recover from data loss and meet Service Level Objectives (SLOs).
  • In the last 12 months, 76 percent of organizations experienced a disruption, and 27 percent experienced irreparable data loss, nearly double the 14 percent in 2016.

A huge opportunity for OEMs


What does this mean for OEMs and application providers? Opportunity. Data protection differentiates your offerings from competitors with premium business value and shields your brand from risk. Further, it can generate an incremental revenue stream by making data protection as a service available to your customers through the cloud.



The real cost of data loss


OEMs and application providers have typically steered clear of data protection since they view data as their customers’ responsibility. This can be shortsighted, particularly as data value and the cost of data loss grow exponentially. According to the GDPI, 74% of organizations are monetizing data or investing in tools to do so. The high costs of non-compliance with regulations, brand damage from data loss or cybersecurity attacks, and the rapid growth of data-driven decisions are driving this trend. Also, 96% of organizations that suffered data loss and/or unplanned systems downtime experienced productivity decreases, inability to provide essential services, product/service development delays, and revenue loss, among other outcomes.

Integrated solutions available today


Another data protection concern for OEMs is complexity. The good news is it’s easier than ever to offer data protection to your customers. No longer must you cobble together multiple backup and replication solutions. Dell Technologies, for instance, provides integrated data protection solutions that allow seamless backup, restore, deduplication and management with a few clicks across cloud, virtualized, physical or on-premises environments. Dell Technologies OEM Embedded & Edge Solutions works with partners to co-develop enhanced data protection services, such as the Teradata Backup, Archive and Recovery solution.

Artificial Intelligence and Machine Learning


The GDPI reports that 51% of respondents cannot find adequate data protection solutions for artificial intelligence (AI) and machine learning (ML). Other emerging technologies customers struggle to protect include IoT and robotics, among others. Again, this presents a big opportunity to add value to your solutions. New use cases and workloads fueled by AI/ML present unique challenges to OEMs.

New data challenges


Large amounts of data, whether from on-premises analytics or edge sensors, are needed for ongoing computations. Previously, this volume of historical data would be discarded or, at best, archived. What’s more, these petabytes of data are a critical part of your IP and likely the source of future revenue opportunities.

Everybody agrees that production data is valuable and must be protected, but how do you handle this new data challenge? As enterprise data points expand from data centers to automated oil rigs to robotics-driven factories to sensor-equipped stores and beyond, your customers require multi-pronged data protection solutions that work seamlessly with their applications and core, cloud, and edge environments.

Sizing the opportunity


To help assess this opportunity, we recommend downloading the GDPI for a detailed view, including global and regional infographics. Visit the data protection calculator to see where your customers stand compared with other firms.

Making data protection a primary solution design consideration, just as you treat storage, servers and networking, is a high-reward opportunity to deliver differentiated value, create more revenue, and help your customers grow and succeed.

Friday, August 2, 2019

Bringing Simplicity to a Complex World – No Easy Task

I could not agree more with this quote from the Dutch essayist and pioneer in computing science, Edsger Dijkstra - renowned for his work on algorithms from the ‘60s to the ‘80s. For the 30+ years I’ve been working in the IT industry, I’ve observed that with every new hype comes the promise of a complexity killer whereas, in fact, the new trend often creates more data silos to deal with, at least for a transition period.

The most recent example is cloud computing, whose scalable pay-per-use model can bring real flexibility benefits to users, while generating infrastructure chaos if there’s no integrated multi-cloud management solution to bring consistency between private clouds, public clouds and on-premise datacenters. 93% of companies use more than one cloud. They need a unifying partner to help them manage this complexity - connecting teams and processes across different platforms. Dell Technologies offers services, solutions and infrastructure to achieve consistency in a multi-cloud world and eliminate obstacles.



As a CFO, I consider it part of my mission to fight unnecessary complexity, whenever I can. I share the opinion of Jim Bell, a former CFO turned CEO, that complexity is the enemy of agility and that some degree of automation (through selected RPA technologies, for example) can help make things like planning and forecasting simpler in an era where companies are more and more data-driven.

Now, how do you take all the noise away and make sure you focus on tools and data that actually bring some return on investment to the business?

  • I believe the first milestone on the path to simplicity is to create and apply metrics that integrate user-friendliness when attempting to calculate the productivity gains produced by a piece of software or an application. Dare to question (pilot) users on the time they need to get up to speed with the solution. How simple do they find it? Do they confirm the efficiency gains the sales rep convinced you of? Do they see room for improvements that would make their lives much simpler?
  • Next, when rolling out a new solution, set the right framework around the project. By ‘right’, I mean a steering committee, for example, that has the authority to take (drastic) corrective action immediately. Concretely, make sure you have a good balance in that decision body between ‘subject matter experts’ and ‘outsiders’ so that you have different perspectives on what is or isn’t complex. In any case, you need mavericks who will challenge the projects on the simplicity/user-friendliness side. The profile of the ‘maverick’ will depend on the type of project. For instance, in a very process-driven accounting project, it’s interesting to have someone with a creative personality track the simplicity of the project, in combination with more system-driven types of people.
  • My third tip is to learn and share lessons from every IT project so that each project is a step forward on an improvement path towards greater efficiency. For instance, each year in January, I put ‘simplifying the complex’ on my list of priorities to discuss with the team, based on what we learnt in the past year.
  • Finally, I believe fighting complexity often comes down to changing (bad) habits - “we’ve always worked this way, so it’s probably the best”. I’m convinced that simplicity starts with the right mindset - the capacity to challenge things and be open to change. Why should we continue with complex processes if there are simpler alternatives? It’s a mindset that should be encouraged in the workplace, certainly towards newcomers who don’t have a biased view yet.


In a recent podcast on the evolution of the CFO, McKinsey consultants refer to the finance function and the CFO as a talent factory that must flex different muscles to attract, retain and drive talent moving forward. I’m convinced that the ability to bring more clarity to things that are usually messy is one of these key muscles.

Wednesday, July 31, 2019

Why Open Innovation is Critical in the Data Era: Tech Breakthroughs Don’t Happen in a Vacuum

In 2003, Dr. Henry Chesbrough published a paper that challenged organizations to drive new technology breakthroughs outside of their own four walls, and to team with customers and partners for an outside view. The approach, open innovation, follows a framework and process that encourages technologists to share ideas to solve real challenges.

I loved it. It was fast, yet practical. It was conceptual, but grounded in real-world challenges that we could solve. Time and resources invested in innovation delivered better outcomes because it was designed with customers and partners.

For me, open innovation is core to how my teams have fostered new technology breakthroughs and patents that get realized in real-world use cases. It’s an archetype that has proven effective for Dell Technologies, particularly as our customers look to modernize their IT infrastructure as part of their digital transformation.



Four tenets govern open innovation: collaboration, being “open” in nature, rapid prototyping, and a clear path to commercialization. Our innovation teams have embraced this approach, developing new solutions alongside our customers and partners based on the realities of the market landscape over the next three to five years. It’s a thoughtful mix of academic research, advanced internal technology, and developments from around the technology ecosystem.

Each engagement outlines problem statements and the many lessons learned from previous projects, and uses numerous internal and external sources from around the globe to collaborate and ideate. In a few short days, we develop and test prototypes and proofs-of-concept iterated in a real-world environment. This gives us the chance to learn critical lessons where we need to innovate around roadblocks, with a goal of designing a solution that’s incubated and integrated within 12-18 months, and primed to solve the challenges that lie ahead.

For instance, we’ve worked with service providers to advance cloud-based storage container innovation designed specifically for IoT and mobile application strategies, laying the foundation for an IT infrastructure that can evolve rapidly to handle the volume of data that was then anticipated from 5G deployments and edge devices - happening today.

The scope of innovation projects underway today continues to focus on how we drive more value from the exponential data resulting from more connected devices, systems, and services at the edge. IDC forecasts that by 2025, the global datasphere will grow to 175 zettabytes - 175×10²¹ bytes, or 175 billion 1TB drives.[1] Dell Technologies Vice Chairman Jeff Clarke recently put that into context during the keynote at Dell Technologies World - that’s more than 13 Empire State Buildings packed with data from head to toe! Much of it will happen at the edge. The edge computing market is expected to grow 30% by 2022.[2]

All that data can drive better outcomes, processes and, of course, new technology that may be the next major industry disruption and breakthrough. The key word is potential - these are challenges that require innovation not simply to take action, but to ensure that the solution can be deployed and commercialized. Through the open innovation approach, we’re collaborating with customers and partners to meet the new demands of the “Data Era,” and ensuring that data, wherever it lives, is being preserved, mobilized, analyzed and activated to, ultimately, deliver intelligent insights.

Open innovation enables us to be pioneers in software-defined solutions and systems that can scale to handle the influx of data and ensure they evolve with new software and application updates - and unlock our customers’ data capital.

For example, we’re working with the world’s largest auto manufacturers to build their edge infrastructures and data management capabilities to support huge fleets of autonomous cars! Through innovation sprints and collaboration, we’ve been able to understand what’s needed for data to operate in real time at the vehicle level, driving intelligence and automation through AI/ML, while ensuring data management in the cloud and data center is equipped to handle zettabytes of data. It’s our view that the infrastructure powering the future of smart mobility will be the first private zettascale systems on the planet, and Dell is part of the journey to make that a reality.

We’ve partnered with customers in retail to develop intelligent software-defined storage solutions that support integrated artificial intelligence (AI) and machine learning (ML). This automates software updates, which can frequently zap productivity from IT teams. Using software-based storage offerings provisioned through automation, IT teams can now develop data-driven business applications that deliver better customer experiences.

We’re also continuing our work with service providers and enterprises to build the edge infrastructure needed for 5G. For instance, we’re working with Orange on specific solutions that look at how AI/ML can manage edge environments. At the same time, we’re helping service providers evolve their multi-cloud strategy so they can seamlessly manage and operate a variety of clouds that exist in public cloud domains, on-premises for faster access and stronger security, and clouds at the edge that help them manage data in the moment.

In my experience, innovation with “open” collaborative frameworks and processes delivers practical yet incredibly significant fast innovation across any industry. You cannot advance human progress through technology if it can’t get into the market to provide real leading-edge solutions to problems not previously solved. The single greatest challenge before our customers is the risk of being disrupted by a digital version of their business that can better exploit technology innovation. That is why we aim to work with our customers to innovate as fast as possible through open innovation - ensuring our customers can be the disrupters, not the disrupted.