Making It in Multichannel E-Commerce

Contemporary e-commerce means selling through a variety of channels beyond traditional company-owned websites — including a plethora of social media, mobile and other platforms. Multichannel selling quickly has become the norm, and consumers increasingly expect to be able to shop wherever they happen to be in the digital world.

“Today, retailers have countless options of channels to choose from,” said Marcel Hollerbach, CMO of Productsup.

“There are hosted online stores, big marketplaces like Amazon or Walmart where third-party sellers list their items, and social media platforms like Instagram or Pinterest that have recently added shopping features. If you’re listing your products on any of these channels at the same time, you’re partaking in multichannel e-commerce,” he told the E-Commerce Times.

The ultimate goal of multichannel e-commerce is to sell more products. Simply put, the more places where consumers can find a company’s products, the more products that company can sell.

“There are many benefits to selling on multiple channels, but the main goal for companies is to increase sales,” said Nick Maglosky, CEO of Ecomdash.

“Being able to have your products in a place where your target audience shops increases the chances of someone buying your products,” he told the E-Commerce Times. “Each individual consumer has his or her own appreciation for shopping on any given channel. Some shoppers only like to buy from Amazon because of the Prime shipping benefits, while other shoppers like purchasing directly from a retailer because it has that ‘support the local/small business’ feeling. The main goal for the company is to meet its target audience’s shopping desires by selling on the platforms they prefer.”

Multichannel Benefits

Multichannel e-commerce offers many benefits for consumers. They’re much more likely to find products they want and need if those products can be obtained in the places they’re already hanging out.

“The influx of online channels that have emerged over the last decade creates tremendous shopping opportunities,” observed Hollerbach. “Making an online purchase can be risky without the ability to see it in person and try it on for size, so being able to look at a product on multiple sites and read various reviews helps you to make smart purchasing decisions.”

Consumers also respond well to multichannel e-commerce because it’s easier on the wallet.

“Take Amazon Prime Day, for example,” said Hollerbach. “Multiple channels, like Amazon and Target, had different deals over the two-day event. Consumers could search for the products they wanted across various sites to find the lowest listing.”

Multichannel selling offers e-commerce companies a host of benefits, as well — from increased sales to a broader reach.

“Retailers want to be where consumers are, and today consumers shop across several channels before they make a purchase,” noted Hollerbach. “To follow consumers in their shopping journey, retailers need to be at as many of those touchpoints as possible. The more visibility consumers have to your products while they’re bouncing from channel to channel, the more likely they are to buy your product. Not only does having a strong multichannel e-commerce strategy drive more sales from recurring customers, but it also expands your reach to new customers.”

Multichannel Strategies

Deciding on the proper channels is a major part of being a successful multichannel retailer — and the process of deciding where to be must be ongoing, since new channels open up every day.

“The first thing retailers looking to expand to more channels need to do is identify the channels that will be most profitable for them,” said Hollerbach. “The number of options to choose from can be overwhelming, so starting with a clear understanding of your target customers, niche, and inventory logistics will help determine the best possible channels to sell on.”

Knowing the specific characteristics of each channel also helps businesses formulate a plan for exactly how to market their products in those spaces.

“The channels you choose will also determine your listing strategy. Customers go to certain channels to find specific items or information, so setting up your listings to reflect the channel’s aesthetic and purpose is important to attracting shoppers,” explained Hollerbach. “For example, you’ll want to include the craft behind the manufacturer for a listing on Etsy, whereas on Instagram Shopping, images of the product in use are more important than the product description.”

To be a successful multichannel retailer, it’s vital to have a clear plan in place not just for sales, but also for inventory management and shipping.

“The most common mistake we’ve seen is when e-commerce businesses try to scale across multiple channels without first having a solid process in place for how their inventory will be organized and placed in their warehouse, how they will sync their items across the various channels to avoid oversells and out-of-stock situations, and how their pick-and-pack process will work,” said Nizar Noorani, CEO of SellerChamp.

“Without these three key elements in place, the entire operation goes haphazard,” he told the E-Commerce Times.

The right pricing strategy also is key to successful multichannel sales.

“The other issue we’ve seen often is not pricing your products to cater to each selling channel,” noted Noorani. “Both of these issues can be easily avoided through the use of technology. A software platform like SellerChamp — which enables you to list your products across multiple channels, enables you to store its location in your warehouse, syncs your quantities across the channels where you sell, and reprices your items continuously across all your channels — is key to avoiding these mistakes and to running a successful multichannel operation.”

Above all, it’s important for brands to tailor their sales strategies for each channel — and for the people who frequent that channel.

“Often brands give the same message and collateral to each channel,” said Paul Savage, vice president at Core dna.

“Tailor how you present your brand per channel. Shoppers at Walmart and Amazon may have different reasons for purchasing your items,” he told the E-Commerce Times. “Highlight that in your product information.”

A Multichannel Future

Multichannel selling is becoming the norm in the world of e-commerce.

“As technology evolves, the distinction between online and offline for consumers will disappear,” said Anthony Payne, vice president of global marketing at Brightpearl.

“Services like Buy Online Pick Up In Store (BOPIS) and endless aisle will become more mainstream as retailers — both big box players and independents — respond to these changes and look to introduce accessible technologies to provide more cross-channel choice and convenience,” he told the E-Commerce Times.

The channels available for selling are undergoing constant changes, and the landscape will continue to be in flux, with borders between content and sales blurring.

“There are a few things that we see as part of the evolution of multichannel,” said Michael Ugino, director of product marketing, multi-channel commerce, at GoDaddy.

“For a while now, we’ve been waiting to see when and how Facebook and Instagram will finally realize their commerce potential,” he told the E-Commerce Times.

“The first glimpse of this may be the advancement of consumer engagement platforms like Google as shopping avenues. While most shoppers are familiar with the experience of seeing an advertisement and clicking through to a shopping site, marketplace or landing page, Google will soon provide an interesting opportunity for the ad to actually be the purchase medium, with the shopper never proceeding beyond to a checkout destination,” Ugino pointed out.

“For the merchant,” he added, “this means changes in how to optimize a channel that’s no longer as tangible and workable as those in which they traditionally operate.”

Security Pros: Be on High Alert for Certificate Changes

They say that the key to good security is constant vigilance. As a practical matter, this means that it’s important for security and network pros to pay attention to two things: changes in the threat landscape, so they can be on the alert for how their systems might be attacked; and changes and developments in the technologies they employ.

It’s in light of the second part — paying attention to changes in the underlying technology — that I want to call attention to a change under discussion right now. It may not seem overly significant on the surface, but it has long-term consequences lurking beneath.

These consequences matter quite a bit. If they’re not planned for, these changes can lead to dozens of wasted hours spent looking for difficult-to-debug application failures, potential service interruptions, or other impacts that may not be apparent when viewed cursorily. I’m referring here to proposed changes under discussion related to the lifetime of X.509 certificates used for TLS/SSL sessions.

So what’s going on with certificates? The backstory is that Google made a proposal at the June CA/B Forum to shorten the lifespan of X.509 certificates used in TLS once again, to just over one year (397 days).

The CA/Browser Forum (CA/B Forum) is a consortium of PKI industry stakeholders: certificate authorities (the organizations that actually issue certificates), and relying parties (software manufacturers, such as browser vendors, that rely on the certificates being issued). Its mandate is to establish security practices and standards around the public PKI ecosystem.

The current two-year standard (825 days) for maximum certificate lifetime, set in March 2018, was shortened from a prior three-year (39 months) lifetime. This time, the forum is revisiting the one-year proposal. As was the case last time, there has been some natural pushback from certificate authorities in the business of actually issuing the certificates involved.

Who Cares How Long the Lifetime Is Anyway?

The fact of the matter is that there are good arguments on both sides of the certificate lifespan fence, both for and against shortening the maximum certificate lifespan.

First, there is the issue of certificate revocation. Specifically, it is the responsibility of those relying on certificate validity (for most use cases, this means browsers like Chrome, Edge and Firefox) to ensure that revocation status for certificates is checked appropriately. This is the kind of thing that sounds easy to do until you think through the full scope of what it entails.

For example, it’s not just browsers that have to implement validity checking. So do software libraries (e.g. OpenSSL, wolfSSL), operating system implementations (e.g. CAPI/CNG), implementations like CASB products or other monitoring products that seek to perform HTTPS Interception, and a bunch of others.

As one might suspect, given the complexity, not every implementation does this well or as thoroughly as is desirable (as noted in US-CERT’s technical bulletin on the topic of HTTPS Interception). Having a shorter lifespan means that there is a reduced ceiling of how long a revoked certificate can remain in use even if an implementation doesn’t check revocation status.

On the other hand, keep in mind that most new applications rely heavily on Web services as a key method of operation. It’s not just browsers and associated products that rely on certificates, but increasingly it’s also applications themselves.

This in turn means that when certificates expire, it not only can have a negative impact on the user experience for those seeking to access websites, but also can cause applications to fail when critical Web services, such as those on the server end of RESTful APIs (where business logic actually is implemented), can’t establish a secure channel. In this case, certificate expiration can cause the application to fail unexpectedly — “it worked yesterday but doesn’t work now” — in a difficult-to-debug kind of way.
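To make that failure mode concrete, here is a minimal Python sketch (the API URL is a hypothetical placeholder) of how an expired server certificate typically surfaces in application code: not as an obvious “certificate expired” alert, but as a TLS verification error wrapped inside a generic connection failure unless the code checks for it explicitly.

```python
import ssl
import urllib.error
import urllib.request

API_URL = "https://api.example.internal/v1/orders"  # hypothetical REST endpoint

try:
    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        print("API reachable, status", resp.status)
except urllib.error.URLError as err:
    # An expired server certificate usually lands here as an
    # SSLCertVerificationError ("certificate has expired"), which is easy
    # to misread as a generic network outage in application logs.
    if isinstance(err.reason, ssl.SSLCertVerificationError):
        print(f"TLS certificate problem calling {API_URL}: {err.reason}")
    else:
        raise
```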

There’s a tradeoff, no matter how you slice it, from an end-user practitioner viewpoint. A shorter lifespan potentially can help alleviate problems resulting from failure to properly implement revocation checking, but at the same time can lead to application breakage in situations where certificate expiration is not tracked rigorously. Note that this is in addition to the arguments made for and against by CAs, browser developers, and other stakeholders in the CA/B Forum.

What Security Practitioners Can Do

Regardless of where you fall on the spectrum of for/against this particular change, there are a few things that practitioners can and should do to ensure that their houses stay in order. First of all, there arguably would be less need to look for alternative strategies to limit exposure from revoked certificates if everybody did a better job of validating revocation status in the first place.

If you’re using a product like a CASB (or other interception-based monitoring tool), developing applications that employ TLS-enabled RESTful APIs, using reverse proxies, or otherwise handling the client side of TLS sessions, it’s a must to ensure that revocation status checking is performed, and performed accurately.

This is a good idea regardless, but the fact that those in the know are pushing this change suggests that the problem may be worse than you might think.

Second, keep track of the expiration of certificates in your environment. Ideally, keep a record of who issued them and when they expire, along with a contact point for each one (someone to hassle in the event that it expires).

If you can, routinely canvass the environment for new TLS-enabled listeners that you don’t expect. If you have budget to invest, there are commercial products that do this. If not, you can get information about certificate expiration from vulnerability scan results.

Worst case, a script to systematically trawl an IP address range looking for TLS servers (and recording the certificate details including expiration) isn’t that hard to write using a tool like OpenSSL’s “s_client” interface or the “ssl-cert” option in nmap. Again, this is useful to do anyway, but if the lifespan gets shorter going forward, it will provide more value.
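As a rough illustration, here is a Python sketch in the same spirit as the s_client or nmap approach. The address range and output file are placeholders, and because it relies on the standard library’s default trust store, internal or self-signed certificates will show up as handshake errors rather than expiry dates.

```python
import csv
import socket
import ssl
from datetime import datetime

def cert_expiry(host: str, port: int = 443, timeout: float = 5.0) -> datetime:
    """Connect to a TLS listener and return its certificate's notAfter date."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Aug 20 12:00:00 2025 GMT'
    return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

if __name__ == "__main__":
    hosts = [f"10.0.0.{i}" for i in range(1, 255)]  # placeholder address range
    with open("tls_inventory.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["host", "not_after", "error"])
        for host in hosts:
            try:
                writer.writerow([host, cert_expiry(host).isoformat(), ""])
            except OSError as exc:  # no listener, timeout, or TLS failure
                writer.writerow([host, "", str(exc)])
```

Feeding the resulting CSV into whatever tracking spreadsheet or ticketing system you already use covers the record-keeping suggested above.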

By taking some time and doing a bit of planning now, you can make sure your environment stays optimally positioned, regardless of which way the powers that be ultimately decide to go. Since these measures are prudent anyway, even if the outcome is no shortening of the expiration lifespan, you still derive value from having implemented them.

Major Browsers Block Kazakhstan Government’s Fake Safety Cert

Google, Mozilla and Apple on Wednesday blocked a fake root certificate issued by Kazakhstan’s government to spy on its citizens’ online activities.

The government instructed citizens to install the certificate on all of their devices, and it provided separate instructions for Android, iOS, Chrome, Firefox, and Internet Explorer Web browsers, according to F5 Labs.

When those who installed the certificate attempt to access websites using Chrome, Firefox or Safari, they now will see an error message stating that the “Qaznet Trust Network” certificate should not be trusted.

Google has added the certificate to CRLSet and will block it in other Chromium-based browsers, according to Andrew Whalley of Chrome Security.

“We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism,” said Mozilla Certification Authority Program Manager Wayne Thayer.

Apple reportedly also has taken action to ensure Safari does not trust the certificate.

Redmond Silent

Microsoft has not said anything publicly about the issue.

“The Certificate Authority in question is not a trusted CA in our Trusted Root Program,” a Microsoft spokesperson said in a statement provided to TechNewsWorld by company rep Katie Schick.

Microsoft “likely has a number of large contracts with the government, and they are typically far more exposed if a government wants to go after them, so they tend to be far more cautious,” suggested Rob Enderle, principal analyst at the Enderle Group.

Apple and Google do not have much of a presence in government, he told TechNewsWorld.

Good Intentions?

The fake root certificate let the Kazakhstan government access citizens’ online traffic, circumventing encryption, through a man-in-the-middle (MITM) attack.

The interception system using the fake certificate decrypts traffic and re-encrypts it with its own key before forwarding the traffic to its destination, Censored Planet found.

The aim was to protect Kazakhstan’s users from cyberthreats, according to government officials.

The fake certificate has to be installed manually because browsers do not trust it by default.

Censored Planet first observed the interception of online traffic through the certificate’s mechanism July 17 and began tracking it July 20. The interception was not continuous, starting and stopping several times.

Detecting the Attack

Censored Planet detected the attack using a technique called “HyperQuack,” which involves connecting to TLS servers and sending handshakes that contain potentially censored domains in the server name indication (SNI) extension.

If the response differs from a normal handshake response, the domain is marked as potentially censored.
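The core of that comparison can be sketched in a few lines of Python. This is not Censored Planet’s actual tooling, only an illustration of the idea; the probe address and domains are placeholders, and a real measurement compares much more than the returned certificate.

```python
import hashlib
import socket
import ssl

def handshake_fingerprint(server_ip: str, sni: str, port: int = 443, timeout: float = 5.0):
    """Handshake with server_ip while presenting `sni` in the SNI extension,
    then return a coarse fingerprint: the served certificate's hash, or the error."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # observe what comes back, don't enforce trust
    try:
        with socket.create_connection((server_ip, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=sni) as tls:
                der = tls.getpeercert(binary_form=True) or b""
        return ("cert", hashlib.sha256(der).hexdigest()[:16])
    except OSError as exc:
        return ("error", type(exc).__name__)

# Compare a control handshake against one carrying a potentially censored domain.
control = handshake_fingerprint("203.0.113.10", "example.com")
probe = handshake_fingerprint("203.0.113.10", "sensitive-domain.example")
if control != probe:
    print("Response differs from the control handshake: possible interception")
```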

At least 37 domains were affected.

Connections were intercepted only if they followed a network path that passed the interception system, Censored Planet found.

However, interception occurred regardless of the direction the connection took along the path. That allowed interception behavior to be triggered from outside Kazakhstan by making connections to TLS servers inside the country.

Tempest in Teacup?

Censored Planet has two virtual private server (VPS) clients within Kazakhstan. They were able to access affected sites without any HTTPS interception, suggesting it was not universal.

Many clients do not receive the injected certificate even when connecting to domains known to be affected, the organization pointed out.

Certificates were found injected in about 1,600 of more than 6,700 TLS hosts accessed through one of Censored Planet’s VPS clients, and only 459 of the TLS hosts when accessed from the United States.

Kazakhstan’s government earlier this month said that a new security system being tested caused interruptions to Internet access for residents of the nation’s capital of Nur-Sultan.

One third of all traffic in the city was inspected, the government said, adding that the tests were complete and citizens who had installed the National Certificate could delete it. Citizens would have to install it again if required.

The path to all the 1,600 servers passed through AS 9198 — Kazakhtelecom, which holds a de facto monopoly on backbone infrastructure, and established Kazakhstan’s Internet Exchange Point — a peering center for domestic traffic, according to Freedom House.

If at First You Don’t Succeed

The Kazakhstan government first tried to launch a fake CA attack in 2015.

It applied to become a trusted Certificate Authority (CA) in the Mozilla program, but the request was denied because evidence surfaced in the bug report for that request showing the government planned to intercept traffic by forcing users to install the root certificate.

The latest attack is being tracked in a separate bug report. Kazakhstan described the attack as a test of its cybersystems.

Mozilla blocked the Qaznet certificate because some users already had installed it, and because the organization considered it likely that the government might rely on it again in the future.

If the government switches to a new certificate, Mozilla promised to take similar action to protect the security and privacy of Firefox users.

Browser makers previously have blocked digital certificates. In 2015, Google and Mozilla blocked all new digital certificates the China Internet Network Information Center (CNNIC) issued after a threshold date.

They took that action in response to unauthorized credentials issued for Gmail and other Google domains.

However, Microsoft restricted itself to issuing a security update, and Apple did not take any action against CNNIC.

Fighting Cybercrime: Cybersecurity and Digital Forensics Are the New A-Team

Cybersecurity and digital forensics are instrumental in creating effective defense, analysis and investigation of cybercrime. While both focus on the protection of digital assets, they come at it from two different angles.

Digital forensics deals with the aftermath of the incident in an investigatory role, whereas cybersecurity is more focused on the prevention and detection of attacks and the design of secure systems.

Think of the cybersecurity expert as the frontline police officer and the SWAT response team all in one. The digital forensics expert is the specialist investigator that hunts the perpetrator and seeks to understand their motivations.

Let’s see how the two practices complement each other to stop malicious attacks and track down the criminals involved.

What Does a Digital Forensics Professional Do?

The practice of digital forensics includes the collection, examination, analysis and reporting of incidents involving computer, network and mobile devices. Digital forensics professionals work across both the public and private sectors.

The end goal of a digital forensics investigator is to identify the perpetrator of a cybercrime, obtain hard evidence against that perpetrator, and ensure the evidence is admissible in a court of law.

Case Study: Digital Forensics Helps Solve Cyber Espionage

In 2008, the worst cyberattack in US military history saw an unprecedented amount of classified military data fall into foreign hands. Unprepared for an attack that originated inside its own network, the Pentagon deployed digital forensics investigators to determine the source of the attack and how the breach occurred.

Their investigative work pinpointed the breach to a US military base in the Middle East. The cause was a USB flash drive that one of the military’s own personnel had inserted into a computer on its network — thus bypassing all of the security countermeasures the cybersecurity team had built (e.g. firewalls).

Further investigation found that the individual was not a double agent working within the US Military, but a naive staffer who thought they had found a free flash drive. They had unsuspectingly picked it up in a car park outside the military base, where hundreds of flash drives containing the malware had been scattered. The cybercriminal who planted them only needed one unsuspecting person to pick one up and use it on their computer.

Cyber-forensic professionals — working with cybersecurity experts — played a crucial role in determining the source of the breach and, in turn, putting measures in place to ensure such a breach doesn’t occur again.

The work of a cyber-forensic professional can lead to people and places outside of the digital realm. This attack changed the entire course of the US military strategy towards cybersecurity and cyberwarfare, resulting in a new department of cybersecurity professionals and forensic investigators being created to defend, attack and hunt cybercriminals.

Digital Forensics and Cybersecurity in Action

In the wake of the drone scare at Gatwick airport in the UK, cybersecurity students at Edith Cowan University have been developing a system that automatically tracks and disables rogue drones, while also tracking down their owners. The system, developed through an internship program named Spectrum Watch, can isolate the data traffic being sent to the drone. This means cybersecurity agents can take control of the drone, controlling its descent and minimising the threat it presents. By preserving the drone — as opposed to simply destroying it — digital forensics investigators can analyze it and extract information about the drone’s origin and flight path, and access any images or video recorded by the drone.

If you are interested in a challenging career in cybersecurity, you can study online and gain your Masters in Cyber Security from Edith Cowan University, studying digital forensics as a core unit of your degree.

About This Content
This content is provided in collaboration with Edith Cowan University. It may have been influenced by the sponsor and does not necessarily reflect the views of the ECT News Network editorial staff.

Facebook Gives Privacy-Minded Users Some Control Over Activity Tracking

Facebook on Tuesday announced the release of Off-Facebook Activity, a tool that will let members see which apps and websites supply information about their online activity, and clear that information from their Facebook accounts if they wish.

It will roll out initially to members in Ireland, South Korea and Spain.

Once members have cleared their off-Facebook activity, Facebook will remove their identifying information from the data it gets from the apps and websites they visit.

Facebook then will not know which websites members visited or what they did there. It will not use any of the disconnected data to target ads to members on Facebook, Instagram or Messenger.

Members also can choose to disconnect future off-Facebook activity from their accounts — either all off-Facebook activity, or activity limited to specific apps and websites.

Off-Facebook activity tool

“Given that the average person with a smartphone has more than 80 apps and uses about 40 of them every month, it can be really difficult for people to keep track of who has information about them and what it’s used for,” said Erin Egan, chief privacy officer, policy, and David Baser, director of product management, in a post announcing the new tool.

Off-Facebook Activity provides consumers with safety, security, knowledge, transparency and control, and is also an educational tool, said Randall Rothenberg, CEO of IAB (Interactive Advertising Bureau).

No Respite From Ads

Facebook will control more than 20 percent of the worldwide digital ad market this year, with its ad revenues expected to exceed US$67 billion, according to eMarketer.

That’s not likely to be affected by the Off-Facebook Activity tool, which “seems designed to prevent any major shift in [Facebook’s business model] or the revenue stream,” said Nicole France, principal analyst at Constellation Research.

The only difference Facebook members will see after activating Off-Facebook Activity is that the ads they receive will be less targeted, Egan and Baser noted. They will still see the same number of ads.

Facebook will continue to use the data it gets about members, but that data will not be linked to individuals.

“Many apps and websites are free because they’re supported by online advertising,” Egan and Baser pointed out, “and to reach people who are more likely to care about what they are selling, businesses often share data about people’s interactions on their websites with ad platforms and other services. This is how much of the Internet works.”

Several months into the development of the feature, “people asked for a way to disconnect future online activity from individual businesses — not just all at once,” the executives wrote. “We also heard from privacy experts that it was important to be able to reconnect a specific app or website while keeping other future activity turned off.”

This is in line with the results of an online survey YouGov conducted this spring. Nearly 1,400 of the 2,500-plus adult American participants had installed an ad blocker on their digital devices.

Some consumers, dubbed “ad filterers,” accept certain ads as long as they are not intrusive, noted Ben Williams, director of advocacy at survey sponsor Eyeo, maker of Adblock Plus.

Ad blocking began “as an all-or-nothing proposition,” he said, but “ad filtering, whether accomplished through an ad blocker or even directly through the browser, is growing in popularity.”

Staying Ahead of Regulators

“The Off-Facebook Activity tool is the latest move from Facebook to become more transparent and give more control to consumers over their data,” said Jasmine Enberg, social media analyst at eMarketer.

It is “also likely an effort to stay one step ahead of regulators, in the U.S. and abroad, that are cracking down on Facebook’s ad targeting practices,” she told the E-Commerce Times.

The rollout of the new tool follows Facebook’s announcement last month that it would give users more detailed reasons for the ads being shown to them, as well as update Ad Preferences to tell them more about the businesses providing information about them.

Data processors like Facebook “need to make sure that collection, use and disclosure are fully comprehended before they occur,” said Steve Wilson, principal analyst at Constellation Research.

“That’s privacy,” he told the E-Commerce Times.

Facebook’s ad targeting practices, which long have been the focus of complaints by privacy advocates, shot to prominence last year because of the Cambridge Analytica scandal.

In response, Facebook CEO Mark Zuckerberg announced plans to build Clear History, a tool that would let members clear their browsing history. It appears Clear History has morphed into the Off-Facebook Activity feature.

Germany’s national competition regulator, the Bundeskartellamt, earlier this year ordered Facebook to stop combining user data from different sources without consumers’ consent.

Not Quite Good Enough

Off-Facebook Activity is opt-out rather than opt-in, which “puts the onus on individual users to go through every app and website and try to make an informed decision about what they want to keep and what they don’t,” Constellation’s France told the E-Commerce Times.

“This flies in the face of any assertions that Facebook is built on user privacy,” she said.

“Instead, this seems to me like a not-so-subtle way of building in enough friction to assure that most people never bother to turn off any data sharing,” France contended. “Just as the vast majority of users never bother to customize, say, the look and feel of their email inbox, the majority of Facebook users are unlikely to bother with something that requires this much effort.”

Yubico Offers Dual Lightning, USB-C Dongle to Secure Devices

Owners of iPhones looking for an extra measure of protection when using applications and logging into websites can get it with a new dongle from Yubico, a maker of hardware authentication security keys based in Palo Alto, California.

Its new YubiKey 5Ci, which retails for US$70, supports both USB-C and Apple’s Lightning connectors on a single device. The dual connectors can give security-conscious consumers and enterprise users strong hardware-backed authentication across iOS, Android, macOS and Windows devices.

“Before this key, it was really hard for a user to try and authenticate with a security key across multiple devices,” said Yubico Chief Solutions Officer Jerrod Chong.

“This key has USB-C on one side and Lightning on the other so a user can authenticate to all the devices that they have,” he told TechNewsWorld.

That can improve security across the board, because people no longer need to use weak substitutes for the strong protection a hardware key can provide.

“People are using SMS or one-time pass codes delivered through email, which are not only bad from a security perspective but bad from a usability perspective because you’re typing in codes that you can get wrong,” Chong explained.

“We wanted to make the process simple,” he continued. “You plug in this device, touch a button, and you’re good to go.”

Password Manager Support

The YubiKey 5Ci supports a number of Apple iOS applications out of the box. They include several popular password managers — 1Password, Bitwarden, Dashlane and LastPass.

Idaptive, a single sign-on app for enterprise users, also is supported. Single sign-on apps are used for secure access to corporate clouds from mobile devices.

Some other enterprise applications include YubiKey 5Ci support in their developer kits. They include Okta, XTN and Monkton Rebar.

Authentication keys have a greater appeal to enterprises right now than they do for consumers, Chong acknowledged, but “we will see that change as we get more browser support and more consumer applications enabled for authentication keys.”

To understand why enterprises are keen on authentication keys, all you need to do is look at Google’s experience with them. Since handing out the keys to its more than 85,000 employees in early 2017, it hasn’t had a single successful phishing attack on any of its workers’ accounts.

There have been no account takeovers since Google implemented security keys, a Google spokesperson told security blogger Brian Krebs in July 2018.

No iPad Pro Support

The iOS version of the Brave browser also supports the YubiKey 5Ci. In fact, Brave is the only browser to support WebAuthn via Lightning connector. WebAuthn is an API that allows websites to offer a variety of authenticators to their visitors, including keys and biometric readers. Websites accessible through Brave include Bitbucket.org, GitHub.com, Login.gov, Twitter.com and 1Password.com.

Although the YubiKey 5Ci is compatible with iPhones and iPads with Lightning connectors, it doesn’t work with iPads that have USB-C connectors, even though the plug fits.

Apple limits accessibility through the USB-C port on its iPad Pro models, Chong explained.

“It’s not just a problem with us,” he said. “It’s a problem for anyone that wants to create an accessory with a USB-C connection to an iPad.”

That may change when the new iPadOS is released in the fall.

The YubiKey 5Ci also doesn’t work with FIDO-compliant services or apps out of the box. That’s because iOS doesn’t support FIDO. FIDO is a set of open source security specifications for strong online authentication.

“Apple may be prioritizing other security measures that may have a broader relevance for its consumers,” said Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City.

“It’s also highly likely that if Apple were to support an authentication token for iOS devices, it would be one that they would offer,” he told TechNewsWorld.

Life Left in Lightning

Supporting the Lightning connector opens up the iOS market to Yubico, Rubin noted.

“For some time after supporting USB-C on the iPad, there was a sense that the Lightning connector was on borrowed time,” he said, “but Apple continues to roll out new products with it, and the rumors are Apple will stick with it through the next round of iPhones.”

While most authentication keys are sold in the enterprise, there are consumer niches where they are popular.

“Anyone concerned about being hacked, like journalists or celebrities, use them,” Rubin said.

“Hard-core gamers, too, concerned about other players hacking into their accounts, use this kind of security token to provide an extra level of protection against that kind of attack,” he noted.

“More and more organizations are recommending two-factor authentication, and this can be a way to achieve it relatively seamlessly and with more security than text messages,” observed Rubin.

A problem with hardware solutions like the YubiKey 5Ci is they can be inconvenient. They’re something else to keep tabs on when trying to navigate the Net or, worse, something that can be lost, creating a whole new crop of headaches.

“The tradeoff between security and convenience is a classic conflict. The trick is to find that balance between maximizing security while minimizing inconvenience,” Rubin said. “Something like this certainly could be more convenient than memorizing hard-to-guess passwords used for different services.”

Room for Everyone

Although authentication keys have been linked to the end of passwords, password managers — software used to store logins, create hard-to-guess passwords and automatically access websites — ironically could boost acceptance of something like the YubiKey 5Ci.

“Password managers have a low barrier to entry. In some cases, you can try them for free,” Rubin pointed out.

“Then, once you realize the benefits of better security, you might want to go to the next step and buy one of these devices,” he said.

“Passwords are here to stay, and so are password managers, but passwords aren’t the only game in town and haven’t been for a while,” noted Simon Davis, marketing vice president at Fairfax, Virginia-based Siber Systems, maker of the RoboForm password manager.

“When logging in to sites and apps, people are looking for security and ease of use,” he told TechNewsWorld. “Any product that helps them achieve both will always be sought after.”

Taking the Leap With B2B Customer Experience Tech

The business-to-consumer shopper experience has undergone a seismic shift in the past few years, thanks in large part to new technology. Improved personalization enables consumers faced with a vast field of purchase options to select the brand that best meets their needs and desires.

Business-to-business buyers rarely enjoy the same rich buying experiences. While 90 percent of B2B leaders say customer experience is key to their companies’ strategic priorities, 72 percent have no direct influence over it. At the same time, only 20 percent of B2B companies can be described as “masters” of CX — those who prioritize consumers’ experiences and achieve strong financial results.

It’s time for B2B leaders to take a more active role in their companies’ CX decision making. The ability to deliver a modern, effective experience that meets buyers’ expectations has evolved into a key component of the B2B model. Executives need to step in and ensure that the right tools — including customer relationship management and configure, price, quote software — are in place.

While digital tools represent a big step forward, technology is just a bandage for companies with a larger customer experience problem. B2B leaders striving for true long-term change also must promote a cultural and operational sea change to realign the business as a customer-centric organization. The answers to several key questions will help guide B2B executives on that journey.

Culture Shift

How can an organization shift its culture to support transformation tech like CRM or CPQ?

Unlike most B2C companies, manufacturing businesses typically are led by engineers with strong science, technology and math backgrounds. Their experience and education allow B2B companies to produce superior products to solve specific problems. However, that type of background often makes it harder for them to focus on the customer experience side of the equation.

In our modern economy of choice, producing the best product no longer is a sufficient sales argument. B2B buyers want a seamless end-to-end experience, from a simplified sales discussion to an all-inclusive customer dashboard, to easy access to support.

B2B executives can help ease the transition to a modern mindset by showing their organizations that CRM and CPQ aren’t just sales tools — they help support the goals of every team member. Yes, sales teams utilize the analytics that result from CX software, but customer data also provides development teams with feedback that is crucial to shaping new solutions. It also allows maintenance departments to schedule and predict both preventive and recurring maintenance more accurately.

Shifting your B2B company’s culture to a customer-centric, data-driven mindset helps team members realize that technology will enhance their roles, not hinder or replace them.

Data Silos

How can B2B organizations avoid data silos as they implement new technology?

Data silos are a widespread issue. Nearly all organizations engaged in some kind of computing struggle with them. In B2B organizations, users create a silo when they treat CRM and CPQ as specialized tools and apply them only to the sales side of the business.

While it can be smart to test new tech solutions in a small use case, such as the sales department, they eventually require broader adoption to achieve real results and affect your holistic CX.

Deploying CRM and CPQ organization-wide helps demonstrate the value of tech to stakeholders and potentially skeptical members of your organization. Data silos ultimately hinder your progress. If employees can point to negative effects of new tech, you’ll have a harder time gaining company-wide buy-in.

To help make your case, highlight the tools’ ability to sync up visualization, CAD automation and pricing mechanisms, thus keeping customers, engineers, manufacturers and sales teams on the same page. Every team member can recognize the value of this communication.

Customer Interactions

How does improving CX affect sales teams’ customer interactions?

Digitizing your B2B organization to bolster CX will have a trickle-down effect in your organization, improving the way sales representatives interact with customers face-to-face. The universal accessibility of information allows sellers to present buyers with a slate of product options and provide instant answers about current stock, customization and turnaround time.

The value added by arming your sales team with tools like CRM and CPQ goes far beyond the research customers can do on their own. It’s a critical part of maintaining seller relevance. Currently, less than a quarter (23 percent) of B2B buyers name vendor salespeople as a top-three resource for solving business problems. CRM and CPQ increase seller relevancy by placing the solution in sellers’ hands.

When B2B leadership becomes involved in the CX improvement process, it empowers the entire team to think big. Mid-level managers tend to be a risk-averse group, but when the C-suite leads by example and sets the tone, they understand that ideas and change are not only necessary, but also welcomed with open arms.

When your B2B organization is unified on strategy and goals, the positive effects will be reflected in your customer experience.

US Backs Off Huawei Export Ban for 90 Days

The US$11 billion export component of the business American companies do with Chinese electronics giant Huawei is safe for at least 90 days. The U.S. Commerce Department’s Bureau of Industry and Security on Monday announced an extension of the Temporary General License for Huawei and its non-U.S. affiliates so they can continue to buy goods from American companies.

The license is needed because the Commerce Department has placed Huawei and its affiliates on the Entity List, a blacklist for companies the department sees as engaged in activities that are contrary to U.S. national security or foreign policy interests.

“As we continue to urge consumers to transition away from Huawei’s products, we recognize that more time is necessary to prevent any disruption,” Commerce Secretary Wilbur Ross said. “Simultaneously, we are constantly working at the department to ensure that any exports to Huawei and its affiliates do not violate the terms of the Entity Listing or Temporary General License.”

‘Politically Motivated’

The extension of the Temporary General License does not change the fact that Huawei has been treated unjustly, the company said in response to the move.

The decision won’t have a substantial impact on its business, Huawei maintained, and it intends to continue to focus on providing the best possible products and services to its customers.

Huawei also protested the Commerce Department’s addition of 46 more of its affiliates to the Entity List of banned companies, contending the action was politically motivated and had nothing to do with U.S. national security.

The Commerce Department’s actions violate free market principles and are contrary to the interests of all of the parties involved, Huawei contended.

The company called on the U.S. government to end what it termed “unjust treatment” by removing Huawei from the Entity List.

Crying Wolf

Huawei isn’t alone in doubting the sincerity of the administration’s rationale for the ban.

“No one has found any evidence that Huawei’s a threat to national security,” said Stéphane Téral, a telecommunications technology fellow at IHS Markit, a research, analysis, and advisory firm headquartered in London.

“All vendors are involved in the same standards bodies, are treated the same, and go through thorough evaluation conducted by various agencies,” he told the E-Commerce Times.

Nevertheless, the U.S. has reason to be concerned about doing business with Chinese tech companies, according to Jack E. Gold, principal analyst at J.Gold Associates, an IT advisory company in Northborough, Massachusetts.

“China has done things that they shouldn’t have in the past,” he told the E-Commerce Times. “No doubt they will continue to do so, but if there’s a risk, we have to show it.”

There’s also a danger in playing fast and loose with the term “national security threat.”

“If everything is a national security threat, then when you really have one — after using it 15 times and backing off — then is anyone really going to believe you?” Gold asked.

Prelude to a Tough Deal

There seems to be confusion in the public about the export and import bans on Huawei, suggested Robert D. Atkinson, president of the Information Technology and Innovation Foundation, a public policy think tank based in Washington, D.C.

When President Trump lifted an export ban on Huawei in June, opponents argued that if Huawei equipment introduced systemic cybersecurity vulnerabilities into the United States telecom and Internet system, then an import ban was not enough. The administration also should try to cripple Huawei with export bans.

“If Huawei devices are inherently insecure, especially with regard to Chinese Communist Party spying, the import ban is fully sufficient to address the issue,” Atkinson maintained at the time.

“The export ban was never about cybersecurity,” he pointed out. “It was always about inflicting pain on the Chinese economy so that China might make the kind of deal that would lead them to at least partially act as they should as a WTO member.”

That behavior should include an end to forced tech transfer, reduction in joint venture requirements, and massive reduction in unfair and distorting industrial subsidies, Atkinson said.

“The key question with regard to lifting the ban is whether this is done in the service of a tough, fully enforceable trade agreement — the kind U.S. Trade Representative Robert Lighthizer rightly insists on,” he asserted. “If it is, then the president is right to at least temporarily lift the Huawei export ban.”

Future Shock

Rather than bolster national security, a Huawei ban could have the opposite effect.

The ban would prevent the company from providing proper security and operating system updates to Android users with Huawei phones, according to a June Financial Times report citing unnamed Google officials.

It also would force Huawei to develop its own mobile operating system, which likely would be less secure than Android.

Huawei users would lose access to Google services — services like “Google Play Protect,” which automatically scans for malware, viruses and security threats.

“By doing this to Huawei and others, it’s going to force them more quickly than otherwise to develop their own stuff,” observed Gold. “Three, four, five years down the road, they’ll have everything they need, and they won’t need us anymore — and they’ll be banning our stuff.”

46 Added to Blacklist

In conjunction with extending the Temporary General License, the Bureau of Industry and Security identified 46 additional Huawei affiliates to be blacklisted. Since May, more than 100 persons or organizations have been added to the Entity List in connection with Huawei.

The additions to the blacklist could be a bargaining chip for the administration, said Charles King, principal analyst at Pund-IT, a technology advisory firm in Hayward, California.

“If the 90-day extension is the ‘carrot’ in this scenario, adding 46 additional Huawei subsidiaries to the ban list is the ‘stick’ that the Trump administration is using to show that it still means business,” he told the E-Commerce Times.

“Practically speaking, it adds additional pressure to what Huawei is already experiencing,” King said, “but it’s hard to gauge what the long-term effects will be.”

Cerebras Debuts Big Chip to Speed Up AI Processes

Startup chip developer Cerebras on Monday announced a breakthrough in high-speed processor design that will hasten the development of artificial intelligence technologies.

Cerebras unveiled the largest computer processing chip ever built. The new chip, dubbed “Wafer-Scale Engine” (WSE) — pronounced “wise” — is the heartbeat of the company’s deep learning machine built to power AI systems.

WSE reverses a chip industry trend of packing more computing power into smaller form-factor chips. Its massive size measures eight and a half inches on each side. By comparison, most chips fit on the tip of your finger and are no bigger than a centimeter per side.

The new chip’s surface contains 400,000 little computers, known as “cores,” with 1.2 trillion transistors. The largest graphics processing unit (GPU) measures 815 square millimeters and has 21.1 billion transistors.


Cerebras Wafer-Scale Engine

The Cerebras Wafer-Scale Engine, the largest chip ever built, is shown here alongside the largest graphics processing unit.

The chip already is in use by some customers, and the company is taking orders, a Cerebras spokesperson said in comments provided to TechNewsWorld by company rep Kim Ziesemer.

“Chip size is profoundly important in AI, as big chips process information more quickly, producing answers in less time,” the spokesperson noted. The new chip technology took Cerebras three years to develop.

Bigger Is Better to Train AI

Reducing neural networks’ time to insight, or training time, allows researchers to test more ideas, use more data and solve new problems. Google, Facebook, OpenAI, Tencent, Baidu and many others have argued that the fundamental limitation of today’s AI is that it takes too long to train models, the Cerebras spokesperson explained, noting that “reducing training time thus removes a major bottleneck to industry-wide progress.”

Accelerating training using WSE technology enables researchers to train thousands of models in the time it previously took to train a single model. Moreover, WSE enables new and different models.

Those benefits result from the very large universe of trainable algorithms. The subset that works on GPUs is very small. WSE enables the exploration of new and different algorithms.

Training existing models at a fraction of the time and training new models to do previously impossible tasks will change the inference stage of artificial intelligence profoundly, the Cerebras spokesperson said.

Understanding Terminology

To put the anticipated outcomes into perspective, it is essential to understand a few concepts about neural networks, chief among them the difference between training and inference.

For example, you first must teach an algorithm what animals look like. This is training. Then you can show it a picture, and it can recognize a hyena. That is inference.
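For readers who want to see that split in code, here is a toy sketch using scikit-learn as a stand-in for a deep learning framework; the feature vectors and labels are made-up placeholders rather than real image data.

```python
from sklearn.ensemble import RandomForestClassifier

# Training: show the algorithm labeled examples of what each animal "looks like".
features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # made-up image features
labels = ["hyena", "hyena", "zebra", "zebra"]
model = RandomForestClassifier(n_estimators=10).fit(features, labels)

# Inference: show it a new picture (feature vector) and ask what it sees.
print(model.predict([[0.85, 0.15]]))  # -> ['hyena']
```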

Enabling vastly faster training and new and improved models forever changes inference. Researchers will be able to pack more inference into smaller compute and enable more power-efficient compute to do exceptional inference.

This process is particularly important since most inference is done on machines that use batteries or that are in some other way power-constrained. So better training and new models enable more effective inference to be delivered from phones, GoPros, watches, cameras, cars, security cameras/CCTV, farm equipment, manufacturing equipment, personal digital assistants, hearing aids, water purifiers, and thousands of other devices, according to Cerebras.

The Cerebras Wafer-Scale Engine is no doubt a huge feat for the advancement of artificial intelligence technology, noted Chris Jann, CEO of Medicus IT.

“This is a strong indicator that we are committed to the advancement of artificial intelligence — and, as such, AI’s presence will continue to increase in our lives,” he told TechNewsWorld. “I would expect this industry to continue to grow at an exponential rate as every new AI development continues to increase its demand.”

WSE Size Matters

Cerebras’ chip is 57 times the size of the leading chip from Nvidia, the “V100,” which dominates today’s AI. The new chip has more memory circuits than any other chip: 18 gigabytes, which is 3,000 times as much as the Nvidia part, according to Cerebras.

Chip companies long have sought a breakthrough in building a single chip the size of a silicon wafer. Cerebras appears to be the first to succeed with a commercially viable product.

Cerebras received about US$200 million from prominent venture capitalists to seed that accomplishment.

The new chip will spur the reinvention of artificial intelligence, suggested Cerebras CEO Andrew Feldman. It provides the parallel-processing speed that Google and others will need to build neural networks of unprecedented size.

It is hard to say just what kind of impact a company like Cerebras or its chips will have over the long term, said Charles King, principal analyst at Pund-IT.

“That’s partly because their technology is essentially new — meaning that they have to find willing partners and developers, let alone customers to sign on for the ride,” he told TechNewsWorld.

AI’s Rapid Expansion

Still, the cloud AI chipset market has been expanding rapidly, and the industry is seeing the emergence of a wide range of use cases powered by various AI models, according to Lian Jye Su, principal analyst at ABI Research.

“To address the diversity in use cases, many developers and end-users need to identify their own balance of the cost of infrastructure, power budget, chipset flexibility and scalability, as well as developer ecosystem,” he told TechNewsWorld.

In many cases, developers and end users adopt a hybrid approach in determining the right portfolio of cloud AI chipsets. Cerebras WSE is well-positioned to serve that segment, Su noted.

What WSE Offers

The new Cerebras technology addresses the two main challenges in deep learning workloads: computational power and data transmission. Its large silicon size provides more chip memory and processing cores, while its proprietary data communication fabric accelerates data transmission, explained Su.

With WSE, Cerebras Systems can focus on ecosystem building via its Cerebras Software Stack and be a key player in the cloud AI chipset industry, noted Su.

The AI process involves two broad stages: first training a model on large volumes of data, and then using the trained model for inference.

The problem the larger WSE chip solves is that computers built from multiple chips slow down when data has to travel between the chips over the slower wires linking them on a circuit board.

The wafers were produced in partnership with Taiwan Semiconductor Manufacturing, the world’s largest chip manufacturer, but Cerebras has exclusive rights to the intellectual property that makes the process possible.

Available Now But …

Cerebras will not sell the chip on its own. Instead, the company will package it as part of a computer appliance Cerebras designed.

A complex system of water-cooling — an irrigation network — is necessary to counteract the extreme heat the new chip generates running at 15 kilowatts of power.

The Cerebras computer will be 150 times as powerful as a server with multiple Nvidia chips, at a fraction of the power consumption and a fraction of the physical space required in a server rack, Feldman said. That will make neural training tasks that cost tens of thousands of dollars to run in cloud computing facilities an order of magnitude less costly.

Avoid a Black Friday, Cyber Monday Disaster With Intelligent Testing

Many online businesses rely on Black Friday and Cyber Monday to drive their profit margins. During this four-day period, retailers will see traffic on their site skyrocket.

How can retailers make sure their sites are robust and won’t fail during this critical period? The answer lies in the application of intelligent testing.

Black Friday traditionally has been the day when retailers finally break even for the year. “Black” in this case refers to accounts finally going into the black. The rise of online commerce has driven Black Friday to new heights. Now the sales phenomenon lasts over the whole weekend and into Cyber Monday.

Over the five days from Thanksgiving to Cyber Monday 2018, 165 million shoppers spent more than US$300 each, on average.

Most online retailers will see a massive surge in traffic over the Black Friday weekend. In fact they will see a double whammy. Not only do more people visit — they visit repeatedly in their search for the best deals. As a result, retailers’ backend services are placed under enormous strain.

A failure during this period would be devastating, bringing bad headlines, lost revenue and, quite possibly, the loss of valuable future business. So, how do you avoid these pitfalls? The answer is to ensure your site is completely bombproof and can handle the surge in load without a problem.

Stress Testing

Stress testing refers to the process of adding load to your website until it fails, or until the performance drops below an acceptable level.

Typically, there are two types of stress testing. In the first, you check that your site can handle the expected peak traffic load. In the second, you steadily increase the load to try to push your site to fail. This is important, as you need to check that it fails gracefully. Traditionally, this sort of testing has been done in a very static manner, but as we will see, this isn’t very realistic.

API-Based Stress Testing

The earliest form of stress testing involved creating a script to call your API repeatedly. The API, or application programming interface, is how a user’s client (browser or app) connects with your backend server. You can simulate users by calling the API directly with command-line tools like cURL, or with specialized tools like SoapUI or Artillery.

The idea is to place so much load on your back end that it fails. This approach has the advantage of simplicity, although it can be challenging to write the script. Each session will need its own API key, so you will need a script with enough smarts to handle all the keys and sessions.
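
To make that concrete, here is a minimal sketch of the approach in Python, using the requests library and a thread pool. The endpoint, API keys and payload are placeholders rather than a real storefront or any particular tool’s output, and a real harness would add ramp-up, timing and error reporting.

    # Minimal sketch only: the URL, API keys and payload are placeholders.
    import concurrent.futures
    import requests

    BASE_URL = "https://shop.example.com/api"              # hypothetical endpoint
    API_KEYS = [f"load-test-key-{i}" for i in range(200)]  # one key per simulated session

    def simulate_session(api_key: str) -> int:
        """Run one simulated shopper session and return the final HTTP status."""
        with requests.Session() as session:
            session.headers["Authorization"] = f"Bearer {api_key}"
            session.get(f"{BASE_URL}/products", params={"q": "headphones"}, timeout=10)
            response = session.post(f"{BASE_URL}/cart",
                                    json={"sku": "ABC-123", "qty": 1}, timeout=10)
            return response.status_code

    # Fire all sessions concurrently and count how many completed successfully.
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(simulate_session, API_KEYS))
    print(f"{results.count(200)} of {len(results)} sessions returned HTTP 200")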

However, there are three big drawbacks to this approach:

  1. Modern Web applications rely on dozens of interlinked APIs. This approach can’t test all these interactions properly.
  2. All sessions are coming from the same physical (and logical) source. This means that your load balancers will not be doing their job properly.
  3. Real users don’t interact in a predictable manner. Modeling this randomness is extremely hard in a test script.

API testing is still useful, but typically only for verifying the behavior of the APIs.

The Importance of Realism

Once upon a time, a website was a simple beast. It typically used a LAMP stack: a Linux server, the Apache webserver, a MySQL database, and PHP for the application logic. The services all ran on a single server, possibly replicated to handle failures. The problem is that this model doesn’t scale. If you get a flash crowd, Apache is quickly overwhelmed, and users will see an error page.

Nowadays, sites are far more complex. Typically, they run from multiple locations (e.g. East Coast and West Coast). Sessions are distributed between sites by a load balancer, which uses heuristics such as the source IP address to spread the load evenly.

Many sites are now containerized. Rather than a single server, the application is built up from a set of containers, each providing one of the services. These containers usually are able to scale up to respond to increased demand. If all your flows come from the same location, the load balancer will struggle to work properly.

Session-Based Testing

Tools like LoadNinja and WebLOAD are designed to provide more intelligent testing based on complete sessions. When users access a website, they create a user session. Modern websites are designed so that these sessions are agnostic to the actual connection. For example, a user who moves from a WiFi hotspot to cellular data won’t experience a dropped session. The geekier among you will know that “session” is layer 5 of the OSI model. Testing at this layer is far better than API testing, since it ensures all APIs are called in the correct order.

Generally, these tools need you to define a handful of standard user interactions or user journeys — for instance, a login flow, a product search, and a purchase. Often the tool records these user journeys. In other cases, you may have to define the test manually, much like you do when using Selenium for UI testing.
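
For illustration, here is roughly what one hand-scripted user journey could look like using Selenium’s Python bindings. The URL, element IDs and credentials are invented for this sketch; a commercial tool would typically record an equivalent flow for you.

    # Sketch of a hand-scripted journey: log in, search, open a product.
    # The URL, element IDs and credentials are placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com/login")

        # Step 1: log in with a test account.
        driver.find_element(By.ID, "email").send_keys("loadtest@example.com")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

        # Step 2: search for a product.
        driver.find_element(By.NAME, "q").send_keys("wireless headphones", Keys.ENTER)

        # Step 3: open the first result, as a shopper would.
        driver.find_element(By.CSS_SELECTOR, ".product-list a").click()
    finally:
        driver.quit()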

Having defined the user journeys, you then can use them to run stress tests. These tests are definitely better than the API testing approach. They are more realistic, especially if you define your scenarios well. However, they still have the major drawback that they are running from a single location. They also suffer from the issues that impact all script-based testing — namely, selector changes.

The Importance of Selectors

Ever since Selenium showed the way, script-based testing tools have used selectors (CSS selectors, XPaths and similar locators) to identify UI elements in the system under test. Elements include buttons, images, fields in forms, and menu entries. For load testing tools, these elements are used to create simple scenarios that test the system in a predictable fashion. For instance, find and click the login button, enter valid login details, then submit.

The problem is that these selectors are not robust. Each time you change your CSS or UI layout, many of the selectors can break. Even rendering the UI at a different resolution can trigger changes, as can using a different browser. This means that you need to update your scripts constantly. Tools like WebLOAD attempt to help by ignoring most of the elements on the page, but you will still have issues if the layout changes.
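
The difference is easy to see in a short sketch. Both locators below are invented for illustration: the first depends on the exact page structure and breaks with the next redesign, while the second keys off a dedicated attribute and is far more likely to survive one.

    from selenium.webdriver.common.by import By

    # Fragile: tied to the exact nesting and ordering of the current layout.
    fragile_locator = (By.CSS_SELECTOR,
                       "div#content > div:nth-child(3) > button.btn.btn-primary")

    # Sturdier: tied to an attribute that describes what the element is.
    stable_locator = (By.CSS_SELECTOR, "button[data-testid='buy-now']")

    def click_buy_now(driver):
        """Click the Buy Now button using the sturdier locator."""
        driver.find_element(*stable_locator).click()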

Intelligent Testing

Recent advances in artificial intelligence (AI) have revolutionized testing. Tools such as Mabl, SmartBear and Functionize have begun applying machine learning and other techniques to create intelligent testing tools.

The best of these tools employ intelligent test agents to replicate the behavior of skilled manual testers — for instance, providing virtually maintenance-free testing and creating working tests directly from English test plans.

Typically, these tools use intelligence to identify the correct selectors, allowing them to be robust to most UI changes. It is even possible to create tests by analyzing real user interactions to spot new user flows. Rather than just test simple user journeys, intelligent test agents can create extremely rich and realistic user journeys that take into account how a real user interacts with your website.

Intelligent Selectors

AI allows you to build complex selectors for elements on a Web page. These selectors combine many attributes — such as the type of element, where it is in relation to other elements, what the element is called, CSS selectors, even complex XPaths.

Each time a test is run, the intelligent test agent learns more about every element on the page. This means that its tests are robust to CSS and layout changes — for instance, if the Buy Now button moves to the top of the page and is colored green to make it more prominent.
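
A very rough approximation of the idea, with made-up locators, is a selector that carries several candidate descriptions of the same element and remembers which one worked last time. Real intelligent test agents build far richer models, but the sketch shows the principle.

    from selenium.webdriver.common.by import By

    # Several made-up descriptions of the same Buy Now button: a test id,
    # its visible text, and a structural CSS path, in order of preference.
    buy_now_candidates = [
        (By.CSS_SELECTOR, "button[data-testid='buy-now']"),
        (By.XPATH, "//button[normalize-space()='Buy Now']"),
        (By.CSS_SELECTOR, "form#purchase button.btn-primary"),
    ]

    def find_with_fallback(driver, candidates):
        """Return the first element matched by any candidate locator."""
        for index, (how, what) in enumerate(candidates):
            matches = driver.find_elements(how, what)
            if matches:
                # Promote the locator that worked, so the next run tries it first.
                candidates.insert(0, candidates.pop(index))
                return matches[0]
        raise LookupError("No candidate locator matched the element")

    # Usage: find_with_fallback(driver, buy_now_candidates).click()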

Complex User Journeys

Test scripts often use simplified user journeys. This is because each script takes days to create and days more to debug. Intelligent test tools support the creation of richer user journeys.

These tools typically fall into two types: intelligent test recorders and natural language processing (NLP) systems. The first records users as they interact with the website, using AI to cope with things like unnecessary clicks or clicks that miss the center of an element. The second uses NLP to take plain English test plans and treat them as a set of instructions for the test system.

Cloud-Based Testing

AI requires significant computing resources, and thus most intelligent test tools run as cloud-based services. Each test is typically run from a single virtual server. These virtual servers may exist in multiple geographic locations — for instance, AWS offers seven locations in the U.S. and a further 15 worldwide. This means each test looks like a unique user to your load balancer.

Intelligent Stress Testing

Intelligent test agents combine realistic user journeys with testing from multiple locations, giving you intelligent stress testing. Each cloud location starts a series of tests, ramping up the load steadily. As each test completes, a new test is started. This takes into account the different duration of each test, allowing for network delay, server delay and so on.

This means you can generate tens of thousands of sessions that look and behave exactly like real users. Better still, you can record session-by-session exactly how your site responds. This allows you to see which pages are likely to cause problems, and gives you detailed insights into how your site performs under load.
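
The scheduling logic behind that ramp-up can be sketched in a few lines of Python. The journey function below is just a stand-in for whatever flow the test agent replays, and the step sizes and timings are arbitrary.

    import concurrent.futures
    import random
    import time

    def run_journey(user_id: int) -> float:
        """Placeholder journey; stands in for a full scripted session."""
        duration = random.uniform(0.5, 2.0)   # stand-in for network and server time
        time.sleep(duration)
        return duration

    RAMP_STEPS = [10, 25, 50, 100]   # target concurrent journeys at each step
    STEP_SECONDS = 15                # how long to hold each load level

    with concurrent.futures.ThreadPoolExecutor(max_workers=max(RAMP_STEPS)) as pool:
        in_flight, user_id = set(), 0
        for target in RAMP_STEPS:
            step_end = time.time() + STEP_SECONDS
            while time.time() < step_end:
                # Top up to the current target concurrency.
                while len(in_flight) < target:
                    in_flight.add(pool.submit(run_journey, user_id))
                    user_id += 1
                # Wait for at least one journey to finish, then replace it.
                done, in_flight = concurrent.futures.wait(
                    in_flight, return_when=concurrent.futures.FIRST_COMPLETED)
            print(f"held about {target} concurrent journeys for {STEP_SECONDS}s")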

This approach addresses all the problems with both API and session-based test tools. The test sessions look and behave exactly like real users, so they will generate the correct sequence of API calls. Your load balancers and infrastructure will behave as they should, because each session looks unique.

Finally, the system is intelligent, so it won’t try to call a new API before a page is properly loaded. This is a marked contrast to other approaches where you tend to use just a fixed delay before you start the next action.

Stress testing is essential for any e-commerce site, especially in the run-up to Black Friday and Cyber Monday. Traditional approaches, such as API and session-based testing, help when you have a monolithic infrastructure, but modern websites are far more complex and deserve better testing.

Intelligent test agents offer stress testing that is far more accurate and effective, allowing you to be confident in how your site will behave under realistic conditions. They also can give you peace of mind that any failure will be handled gracefully.

G2’s Revelations

In a world where information is expected to be free, G2 just made its research reports available to the public gratis. Sure, you have to be a registered user, but how hard is that?

From what I know so far, these are interesting reports, but they may not be the last word because they’re primarily high-level overviews and indices based on G2’s crowdsourcing research approach.

Crowdsourcing is fine as long as we all know the rules — like one person, one vote. I don’t know the rules, so I will take it all on faith and trust in others until something makes me rethink.

Context, Please

I am most interested in CRM, naturally, so I went to the CRM report where I had what might be an aha moment. The list is extensive, but many of the names on the list are only passing acquaintances. Only Paul Greenberg can know so many emerging CRM companies!

At any rate, the CRM Usability Index offers some interesting insights. The scores are compiled from indices that track ease of administration, ease of use, requirements met, and other factors.

Presumably the survey takers answer questions from each column, and G2’s algorithms churn out numbers that, when added together, give a combined score on a 0 to 10 continuum.

I get it — but this doesn’t give me context. I don’t know, for instance, if Less Annoying CRM clocking in at the pole position (9.54) has the same complement of features and functions as Freshsales (8.95), or bpm’online (8.82) or how they’re better than market leaders like Salesforce (8.43) and Zoho (8.30).

It’s hard to have really high numbers when millions of users get a say. So I’m thinking that the 8.5 range is pretty good.

Data vs. Information

A few years ago I did a back of the envelope study by searching on a COMPANY NAME plus the word “sucks.” As I recall, bigger companies had more to suck about and did, which is what you’d expect for outfits with multiple product lines.

Ultimately this list gives you about as much as Gartner’s Magic Quadrant. If you’re a vendor you get bragging rights, and if you’re a buyer seeking information, you get a list of companies to investigate if you’re in the market and, sadly, a list of companies to avoid if you’re reading from the bottom up.

Since success with CRM is all about how well any chosen system works within the context of your organization, you can use this data as a first step — but it doesn’t relieve you of doing your homework.

Also, I’m fairly certain that some of the listed companies don’t support a full CRM lifecycle including sales, service, marketing, commerce, analytics, and all the rest.

Having G2’s data to work with is highly useful, but as we know from artificial intelligence, the data is the thing that drives information generation, and it’s information we crave. That last step happens in the mind of evaluators and acquisition committees.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

T-Mobile Merger Delay Keeps Sprint and Dish on Edge

The T-Mobile-Sprint merger looked like it was pretty much a done deal a couple of weeks ago when the Department of Justice gave its approval. However, it now looks like there won’t be a final answer until 2020. This will go down in history as one of the longest corporate merger attempts.

The delay will be the hardest on Sprint and Dish Network. Even though T-Mobile looks stronger, it needs the deal, too, as it transitions into 5G.

The good news for T-Mobile is that a recent earnings report shows continuing growth — but that also means regulators may not see this merger as necessary. However, as strong as the company is today, it simply does not have what it needs to succeed in a 5G world. T-Mobile desperately needs more wireless spectrum — something it can get through a merger with Sprint.

Sprint’s latest earnings show a company that is weaker than ever. It needs the merger with T-Mobile for survival. So, the delay has cast a dire shadow over its future prospects. Sprint has plenty of spectrum, but it lacks the marketing magic to be successful in wireless.

Time Running Out

Sprint needs to be rescued — and quickly. If T-Mobile can’t do it, perhaps a cable television company — Comcast, Charter, or Altice — could acquire Sprint and then be able to offer wireless services on a network it owned.

Dish Network needs to enter wireless as quickly as possible. In fact, it needed to enter wireless years ago. Perhaps Charlie Ergen was waiting for the right opportunity. This T-Mobile-Sprint merger could be that opportunity if it ever gets done.

One way or another, Dish Network needs to move into wireless. One, offering wireless service could help slow its pay-TV customer losses. It might use the service the way Comcast uses Xfinity Mobile, or Charter uses Spectrum Mobile. Two, if it doesn’t act, it risks losing the mobile spectrum it acquired years ago. Then Dish would be in even bigger trouble.

Dish Network’s Wireless Options

I have many questions with regard to Dish Network and wireless. One is whether it will enter wireless as a real competitor or just create a more valuable asset to sell.

Another is, if it moves into wireless, will it be an offensive or a defensive competitor? Will it be aggressive like AT&T Mobility, Verizon Wireless, T-Mobile, and Sprint, or passive like Xfinity Mobile, Spectrum Mobile, and Altice Mobile?

Perhaps Dish has something else in mind. Maybe it will create a wireless provider for the cable television industry to use. That could make sense if Sprint’s assets are up to the task.

Then again, Xfinity Mobile and Spectrum Mobile already resell Verizon Wireless, a much larger and stronger competitor. Why would they consider a lesser provider? Altice Mobile will resell Sprint. So, as unlikely as this sounds, it remains a possibility.

New Questions About the Deal

There are many new questions surrounding this merger now that Dish is in the mix. That may be one of the reasons it is dragging on without an answer.

This merger may be on again, off again, because the forces pushing back are not giving up. While I still like the idea, the water is getting muddy again. Now, we have to wait until next year for the conclusion of this merger attempt, one way or the other.

Who knows whether this deal will finally be approved? While I hope so — for the sake of T-Mobile, Sprint, and Dish Network — it simply is not a sure thing. What roles will Comcast Xfinity Mobile, Charter Spectrum Mobile, and Altice Mobile play? Who knows at this stage? Stay tuned.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Faulty Driver Coding Exposes Microsoft Windows to Malware Risks

Numerous design flaws in drivers from 20 different hardware vendors expose Microsoft Windows users to widespread security compromises that can enable persistent malware attacks.

A report titled “Screwed Drivers,” which Eclypsium security researchers presented at DEF CON last weekend, urges Microsoft to support solutions to better protect against this class of vulnerabilities.

Microsoft should blacklist known bad drivers, it recommends.

The insecure drivers problem is widespread, Eclypsium researchers found, with more than 40 drivers from at least 20 different vendors threatening the long-term security of the Windows operating system.

The design flaws exist in drivers from every major BIOS vendor, including hardware vendors Asus, Toshiba, Nvidia and Huawei, according to the report.

The research team discovered the coding issues and their broader impacts while pursuing an ongoing hardware and firmware security study involving how attackers can abuse insecure software drivers in devices.

“Since our area of main focus is hardware and firmware security, we naturally gravitated into looking at Windows firmware update tools,” said Mickey Shkatov, principal researcher at Eclypsium.

“Once we started the process of exploring the drivers these tools used we kept finding more and more of these issues,” he told the E-Commerce Times.

The driver design flaws allow attackers to escalate user privileges so they can access the OS kernel mode. That escalation allows the attacker to use the driver as a proxy to gain highly privileged access to hardware resources, according to the report. It opens read and write access to processor and chipset I/O space, model-specific registers (MSRs), control registers (CRs), debug registers (DRs), physical memory and kernel virtual memory.

“Microsoft has a strong commitment to security and a demonstrated track record of investigating and proactively updating impacted devices as soon as possible. For the best protection, we recommend using Windows 10 and the Microsoft Edge browser,” a Microsoft spokesperson said in comments provided to the E-Commerce Times by company rep Rachel Tougher.

Measuring Caution

Attackers would first have to compromise a computer in order to exploit vulnerable drivers, according to Microsoft.

However, the driver design flaws may make the situation more severe, Eclypsium’s report suggests. They actually could make it easier to compromise a computer.

For instance, any malware running in the user space could scan for a vulnerable driver on the victim machine. It then could use it as a way to gain full control over the system and potentially the underlying firmware, according to the report.

If a vulnerable driver is not already on a system, administrator privilege would be required to install a vulnerable driver, the researchers concede. Still, drivers that provide access to system BIOS or system components to assist with updating firmware, running diagnostics, or customizing options on the component can allow attackers to use those tools to escalate privileges and persist invisibly on the host.

To help mitigate this vulnerability, Windows users should apply Windows Defender Application Control to block known vulnerable software and drivers, according to Microsoft.

Customers can further protect themselves by turning on memory integrity for capable devices, Microsoft also suggested.
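
As a quick illustration, the Windows-only Python sketch below reads the registry value that public documentation associates with the memory integrity (hypervisor-protected code integrity) setting. The key path should be treated as an assumption; the Windows Security settings app remains the authoritative view.

    import winreg

    # Registry location believed to hold the memory integrity (HVCI) setting;
    # treat this path as an assumption rather than a guaranteed interface.
    HVCI_KEY = (r"SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios"
                r"\HypervisorEnforcedCodeIntegrity")

    def memory_integrity_enabled() -> bool:
        """Return True if the HVCI 'Enabled' value is set to 1."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HVCI_KEY) as key:
                value, _ = winreg.QueryValueEx(key, "Enabled")
                return value == 1
        except FileNotFoundError:
            return False   # key or value absent: the feature was never configured

    if __name__ == "__main__":
        print("Memory integrity enabled:", memory_integrity_enabled())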

Probably Low-to-Moderate Risk

Security firms stimulate sales opportunities based on vulnerabilities. Reports such as the Eclypsium disclosures are sales vehicles, contended Rob Enderle, principal analyst at the Enderle Group, and it is not unusual to see the results overstate the problems.

“In this instance, they are highlighting vulnerable drivers, which could allow someone to escalate privileges and take over a system. Generally, however, the attacker would have to come in through the compromised device, and that means they’d have to have physical access to the system and, with access, there are a lot of things you can do to compromise a PC,” Enderle told the E-Commerce Times.

The possibility of the user getting tricked into installing malware also exists. That would take advantage of this driver vulnerability, but the attacker would need to know the vulnerability was there first to make this work, he noted.

“Given the hostile environment we are in and the fact we have state-level attackers, any vulnerability is a concern,” Enderle cautioned. “However, because the attack vector is convoluted, and an effective attack requires knowledge of the PC, the actual risk is low to moderate.”

It is certainly worth watching and making sure driver updates both address these vulnerabilities and are applied in a timely way, he added.

Widespread Impact

The driver design flaws apply to all modern versions of Microsoft Windows. Currently, no universal mechanism exists to keep a Windows machine from loading one of these known bad drivers, according to the report.

Implementing group policies and other features specific to Windows Pro, Windows Enterprise and Windows Server may offer some protection to a subset of users. Once installed, these drivers can reside on a device for long periods of time unless specifically updated or uninstalled, the researchers said.

It’s not just the drivers already installed on a system that can pose a risk. Malware can add drivers to perform privilege escalation and gain direct access to the hardware, the researchers cautioned.

The drivers in question are not rogue or unsanctioned, they pointed out. All the drivers come from trusted third-party vendors, are signed by valid certificate authorities, and are certified by Microsoft.

Both Microsoft and the third-party vendors will need to be more vigilant with these types of vulnerabilities going forward, according to the report.

Signing Software Not Always Reliable

Code signing certificates are used to sign applications, drivers and software digitally. The process allows end users to verify the authenticity of the publisher, according to Chris Hickman, chief security officer at Keyfactor, but there is risk involved in fully trusting signed software.

“Opportunistic cyberattackers can compromise vulnerable certificates and keys across software producers, often planting malware that detonates once a firmware or software update is installed on a user’s system. Therein lies the greatest security risk,” he told the E-Commerce Times.

Eclypsium’s discovery that the driver design flaws span numerous hardware makers and software partners drives home the threat that businesses and consumer software users face, Hickman said. The attack vector is similar to the one used in this spring’s Asus hack.

“Attackers can exploit code and certificates to plant and deploy malware when businesses run standard — and usually trusted — updates,” he noted.

Code signing is no guarantee that malware cannot be introduced into software. Other steps must be taken prior to signing the code, such as code testing and vulnerability scanning, Hickman explained.

Once the code is signed, it will be installed as signed, regardless of its contents, so long as the code signing certificate comes from a trusted source. Hence, the security, care and control of code signing certificates should be as important to DevOps as other measures for ensuring legitimate code is produced, he said.
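
A short sketch helps show what signature verification does and does not prove. Using the third-party Python cryptography package and assuming an RSA signing key, the check below confirms only that the file matches a signature from that key holder; it says nothing about whether the signed code is safe. All file paths are placeholders.

    # Sketch only: paths are placeholders and an RSA signing key is assumed.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def package_signature_is_valid(package_path, signature_path, public_key_path):
        """Check a detached signature over a driver package file."""
        with open(package_path, "rb") as f:
            data = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        with open(public_key_path, "rb") as f:
            public_key = serialization.load_pem_public_key(f.read())
        try:
            # Proves only that the bytes match what the key holder signed.
            public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False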

Response and Fixes

All of the impacted vendors were notified more than 90 days before Eclypsium’s scheduled disclosure of the vulnerabilities, according to Shkatov.

Intel and Huawei notified Eclypsium that they publicly released advisories and fixes. Phoenix and Insyde do not directly release fixes to end users, but have released fixes to their OEM customers for eventual distribution to end users.

“We’ve been told of fixes that will be released by two more vendors, but we don’t have a specific timeline yet,” said Shkatov. “Eight vendors acknowledged receipt of our advisory, but we haven’t heard if patches will be released or any timeline for those. Five vendors did not respond at all.”

28M Records Exposed in Biometric Security Data Breach

Researchers associated with vpnMentor, which provides virtual private network reviews, on Wednesday reported a data breach involving nearly 28 million records in a BioStar 2 biometric security database belonging to Suprema.

“BioStar 2’s database was left open, unprotected and unencrypted,” vpnMentor said in an email provided to TechNewsWorld by a company staffer who identified himself as “Guy.”

“After we reached out to them, they were able to close the leak,” vpnMentor said.

BioStar 2 is Suprema’s Web-based, open, integrated security platform.

The leak was discovered on Aug. 5 and vpnMentor reached out to Suprema on Aug. 7. The leak was closed Aug. 13.

What Was Taken

The vpnMentor team gained access to client admin panels, dashboards, back-end controls and permissions, which ultimately exposed 23 GB of records.

The team was able to access information from a variety of businesses worldwide.

The data vpnMentor found exposed would have given any criminals who might have acquired it complete access to admin accounts on BioStar 2. That would let the criminals take over high-level accounts with complete user permissions and security clearances; make changes to the security settings network-wide; and create new user accounts, complete with facial recognition and fingerprints, to gain access to secure areas.

The data in question also would allow hackers to hijack user accounts and change the biometric data in them to access restricted areas. They would have access to activity logs, so their activities could be concealed or deleted. The stolen data would enable phishing campaigns targeting high-level individuals, and make phishing easier.

“There’s not much a consumer can do here, since you can’t really change your fingerprints or facial structure,” observed Robert Capps, authentication strategist at NuData Security, a Mastercard company.

However, a data thief would require access to the consumer’s device to commit biometric authentication fraud at that level.

“Data is not free,” noted Colin Bastable, CEO of Lucy Security.

“There is a responsibility that goes with capturing it. If you cannot afford it, don’t keep it,” he told TechNewsWorld.

The Care and Feeding of Passwords

Many of the accounts had simple passwords like “password” and “abcd1234,” vpnMentor pointed out.

“I can’t see any excuse for using such passwords for real-world applications,” Bastable said.

Yet “these are common passwords still used by consumers today,” Capps told TechNewsWorld. “It’s also possible that these are default passwords set when the account was created, but never changed.”

Using simple passwords for any purpose is “an incredibly bad idea,” Capps said. “It’s a best practice to create a complex password that is memorable, or use a password manager to create highly complex passwords that are unique to a single account.”

Best practices and standards for safe and secure password storage “have been available for decades,” he pointed out.

The vpnMentor team easily viewed more complicated passwords used with other accounts in the BioStar 2 database, because they were stored in plain text instead of being securely hashed.

“If [this] is for real, then it is a fundamental failure of security practice,” Bastable said. “It’s not as if encryption is a lost art or horrendously expensive.”

Passwords never should be stored as plain text, Capps cautioned. Even hashing passwords can be a problem if a weak algorithm or short password is used.

“Many weaker hashing algorithms have had ‘rainbow tables’ — precomputed hash results for simple text strings — that allow the hashed password to be mapped back to their clear text format,” he explained. “This allows for simple recovery of some hashed data.”
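
For contrast, here is a minimal sketch of salted, slow password hashing of the sort Capps describes, using only Python’s standard library. The iteration count and parameters are illustrative, not a recommendation for any particular system.

    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000   # illustrative work factor

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) using a per-user random salt and PBKDF2."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def check_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored)

    salt, digest = hash_password("abcd1234")          # one of the weak passwords cited
    print(check_password("abcd1234", salt, digest))   # True
    print(check_password("password", salt, digest))   # False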

The Greater Danger

Suprema this spring announced the integration of its BioStar 2 solution with the AEOS access control system from Nedap.

More than 5,700 organizations in 83 countries use AEOS. Those entities include businesses, governments, banks and the UK Metropolitan police.

The integration is so seamless that operators can continue working in AEOS to manage finger enrollment and biometric identities without switching screens. Biometric profiles are stored in BioStar and are synchronized with AEOS constantly. SSL certificates protect the synchronization.

Both Nedap’s and Suprema’s clients deal with an exceptional variety of security requirements.

“This can make project implementation complex in nature. The primary goal for this integration has always been to provide a truly flexible and scalable solution that’s easy to implement and maintain,” observed Ruben Brinkman, alliance manager at Nedap.

“This points to a major issue. Convenience is often achieved at a high but hidden cost in terms of compromised security,” Bastable said. “When you seamlessly integrate with another technology, you adopt their security practices and hand these on to your customers.”

The first projects incorporating both firms’ technologies are in the pipeline.

“As a whole, biometric verification is still effective and safe,” NuData’s Capps noted. “Individual implementations may be suspect, depending on the sophistication, security acumen and forward-looking designs implemented.”

Biometric Systems and Safety

“Sadly, there is an assumption that security companies which offer [biometric] technologies are in themselves paragons of security virtue,” Lucy Security’s Bastable said.

“Ask the hard questions of their data security. Don’t trust, but do verify, because your own security relies on your third-party suppliers and partners,” he advised.

“Encrypt,” Bastable added. “Use hardware key security. Tokenization. Have a sound policy, test it — and don’t allow superusers who can abuse their access.”

Spotify for Podcasters Hits the Open Road

Spotify on Tuesday launched Spotify for Podcasters following a year-long beta involving more than 100,000 podcasts from 167 countries.

Spotify for Podcasters is a discovery and analytics dashboard designed to let podcasters track performance through data such as episode retention charts, aggregate demographics about listeners, and details on follower growth. Podcast data is updated daily.



Podcasters can use timestamps in their episode descriptions so listeners can start playing an episode from a precise moment. A timestamp cannot exceed the length of the episode it points to, and timestamps currently are clickable only on mobile devices. Podcasters also can add links to their episode descriptions.

Spotify for Podcasters users can download a .CSV file with their data.

“Data allows a podcaster to better hone their content and attract both advertisers and more listeners,” noted Rob Enderle, principal analyst at the Enderle Group.

“It is critical to building a podcasting business if you know how to use it,” he told the E-Commerce Times. “You can get a better sense of your audience and use that to attract advertisers and refine your content.”

The dashboard is available globally but currently is rendered only in English.

Spotify’s Definitions

There is no industry-standard definition for podcast metrics, so Spotify provides its own definitions for the key metrics in the dashboard.

The most comprehensive data on listeners comes from podcast hosts like Simplecast, suggested James Cridland, editor of podcast industry newsletter Podnews.

That’s because Spotify and Apple provide data only on their own app’s users, he told the E-Commerce Times.

Gunning for Top Position

Apple dominates the podcast business, with 63 percent of the market, according to Andreessen Horowitz. Spotify comes in second place with nearly 10 percent.

Spotify claims more than 200 million listeners across more than 75 countries worldwide, and says its podcasts’ reach has nearly doubled since the beginning of this year.

Spotify earlier this year announced the acquisitions of Anchor, which offers a podcast creation app, and podcast content creator Gimlet Media.

Those buys will enable it to “become the leading platform for podcast creators around the world and the leading producer of podcasts,” said CEO Daniel Ek.

Over time, more than 20 percent of content on Spotify will be non-music content, he predicted, and Spotify’s goal is to become the world’s No. 1 audio platform.

Spotify, which is available for both iOS and Android, has “beaten Apple in a number of different countries as a way of listening to podcasts,” Podnews’ Cridland noted.

Show Me the Money

Video is roughly a trillion-dollar market, while the music and radio industry is worth about US$100 billion, Spotify’s Ek observed. “Are our eyes really worth 10 times more than our ears? I firmly believe this is not the case.”

Podcasting will lead the way for growth in the audio sector, he said.

Ads on podcasts totaled $479 million last year — 53 percent higher than the $314 million spent in 2017, IAB found. They are expected to top $1 billion in 2021.

Podcast listening, which drove that growth, increased 7 percent in one year, the firm said. More than half of Americans aged 12 and over have listened to podcasts. Further, podcast listeners continue to respond well to ads.

Spotify wants “to ride this wave to revenue,” Enderle remarked.

Podcasting ad revenue lags behind attention, and podcast monetization is in the very early stages and remains disjointed, according to Andreessen Horowitz. Still, it has doubled each year for the past few years, and investments in podcasting companies have shot up. Last year, a record number of venture capitalists put money into such firms.

With more than 450,000 shows in its catalog, Spotify may have a content advantage, which is at the core of listeners’ engagement.

All About Ads

“Spotify can already serve ads to listeners based on what genre of podcasts they listen to, and you can suspect they may do more of that,” Cridland said, “but crucially, Spotify is trying to increase app usage time without increasing their costs — which is why podcasting is so attractive. Spotify has to pay to use music. Podcasts, however, come free.”

That low-cost aspect of podcasting also might appeal to the corporate world.

“The decision maker who may not have time to read your report might want to listen to you talk, or watch, if you have a video,” suggested Michael Jude, program manager at Stratecast/Frost & Sullivan.

In fact, Frost may disseminate its analysts’ reports as podcasts in the future.

Podcasts “apply to any company that wants to communicate with an audience of customers or prospects, or anyone else,” Jude told the E-Commerce Times. “If all you want is the essential information, that’s a good option.”

Companies can “do centralized curation and archive, and send podcasts to people on their smartphone,” Jude said. “You can even do video podcasts this way.”