Tag Archives: technology

Sanity Testing

When minor issues are found in the software and a new build is produced after fixing them, a sanity test is performed on that build instead of complete regression testing. In that sense, sanity testing can be considered a subset of regression testing.

Sanity testing is done after thorough regression testing is over; it is done to make sure that any defect fixes or changes made after regression testing do not break the core functionality of the product. It is performed towards the end of the product release phase.

Sanity testing follows a narrow and deep approach, with detailed testing of a limited set of features.

Sanity testing is a form of specialized testing used to find problems in a particular area of functionality.

Sanity testing is done with the intent of verifying whether end-user requirements are met or not.

Sanity tests are mostly non-scripted.
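Even though sanity tests are mostly non-scripted, teams sometimes keep a handful of quick checks around for the build that contains a fix. A minimal sketch in Python (pytest) is shown below; the invoicing module and its functions are hypothetical stand-ins for whatever core functionality the new build touched, not a real library.

```python
# Hypothetical sanity checks for a new build: a few fast, narrow-but-deep
# tests of the functionality the fix touched, not a full regression suite.
import pytest

import invoicing  # hypothetical module representing the application under test


def test_application_starts():
    # The new build should at least initialize without errors.
    assert invoicing.start_session() is not None


def test_core_workflow_still_works():
    # One happy-path check of the core feature around the fix.
    session = invoicing.start_session()
    invoice = invoicing.create_invoice(session, customer="ACME", amount=100)
    assert invoice.total == 100


def test_fixed_defect_stays_fixed():
    # Verify the specific defect that triggered the new build is really gone.
    session = invoicing.start_session()
    with pytest.raises(ValueError):
        invoicing.create_invoice(session, customer="ACME", amount=-1)
```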

Source: http://www.softwaretestingmentor.com/types-of-testing/sanity-testing/

iPhone Vulnerability: Return of the Lock Screen Bypass


Reports yesterday of a lock screen bypass in the iPhone 5 noted that a "similar" bug was found in iOS 4.1 and fixed in 4.2. In both cases, the lock screen, which is only supposed to let you make emergency calls or enter the lock code, allows the user to perform other functions, like make other phone calls. How do these errors resurface after being fixed? In Apple’s case, the problem could be a weakness in their test plans or procedures.

When an error that was fixed shows up again later, it is called a regression error. Regression errors generally occur when some change to the program, such as a new version or a software patch, breaks an existing feature. Security fixes are one type of feature that can be broken this way.

Controlling regression errors is a matter of proper documentation and testing. Good code documentation should at least give future developers the chance to recognize that their changes will affect an existing feature. But testing is the key to preventing regressions.

Any well-designed software project includes a formal test plan. As new features and bug fixes are added, tests should also be added to the test plan to make sure that new fixes don’t break old features or fixes. In the case of security patches, a test needs to be added to the plan for each vulnerability that is fixed.

The real key to making regression testing practical is to automate it. Back around 2007 and 2008, Mozilla had a very bad problem with security patches causing regressions of other security patches. They finally got it under control and attributed their success, in part, to increased automated testing.
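As a sketch of what that kind of automation can look like, the Python unittest example below registers one permanent test per fixed issue so the fix is re-checked on every build. The app module, its lock/session model and both scenarios are hypothetical illustrations, not Mozilla’s or Apple’s actual tests.

```python
# Sketch of a regression suite in which every fixed vulnerability gets a
# permanent automated test. `app` and everything it exposes are hypothetical.
import unittest

import app  # hypothetical application module


class SecurityRegressionTests(unittest.TestCase):
    def test_lock_screen_blocks_contacts(self):
        # Regression test for a (hypothetical) lock-screen bypass fix:
        # a locked session must expose nothing beyond emergency calls.
        session = app.lock_device()
        self.assertFalse(session.can_open("contacts"))
        self.assertTrue(session.can_open("emergency_call"))

    def test_password_reset_requires_token(self):
        # Each later security fix adds its own test so it cannot silently regress.
        with self.assertRaises(app.AuthError):
            app.reset_password(user="alice", token=None)


if __name__ == "__main__":
    unittest.main()
```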

Almost any test can be automated, even by simulating user interface actions via hardware through the USB connection to the device. But the lock screen on iOS is a problem for test automation. The lock screen is designed not to allow external hardware to break out of it, lest someone take your phone and gain control of it. There’s no automated way to test it, so you have to test it manually.

In all likelihood, Apple has some manual tests to perform as well, but it’s easy to see how they would get shrugged off in a hurry or given to some intern who didn’t execute them properly. Expect an angry memo to go around at Apple about this, but deadlines are deadlines and one day the manual testing will again seem like a corner worth cutting.

Source: http://www.informationweek.com/byte/personal-tech/iphone-vulnerability-return-of-the-lock/240148663

Cloud computing 2.0: Where next for business?

Cloud computing has come a long way in the past 12 months. Everywhere we look, the cloud seems to be there – it’s like the industry is a film with the cloud on an overt product placement mission.

At the same time, however, for many, it perhaps feels like cloud hasn’t come far enough – once users get an appetite for technology they want the IT version of the moon on a stick. And, inevitably, they want it yesterday.

This is putting increased pressure on already pressurised IT departments and means business and tech decision makers have to provide even more leadership and guidance than ever before.

In our first IT Pro report, entitled Cloud Computing in 2012, we looked at what cloud is and pondered its potential for business transformation. We went into detail on cloud ‘basics’ such as the difference between public and private clouds, IaaS vs SaaS vs PaaS, and looked at where cloud was headed.

This report aims to move the story on and arm IT professionals, managers, directors and C-level executives with greater insight into not just what cloud can do for their business, but the other factors they must consider.

How, for example, will legacy systems work in this new cloud world? It would be wrong to suggest every business, in every sector, is going to rip and replace what they’ve always had and go 100 per cent cloud.

So how can the two co-exist and what do businesses need to bear in mind and do to get there? Stephen Pritchard takes a look at this very topic.

Security and privacy continue to be major concerns and barriers when it comes to cloud adoption. Can you really trust the cloud in a crisis? Davey Winder tries to answer this question, which is far from simple.

We also run down the storage options available to businesses to help you make the right choice for your organisation.

This report also features case studies from the Alzheimer’s Society, Cancer Research and Lamborghini so business and technology decision makers can benefit from the experience of others when it comes to the cloud.

All in all, we hope this report provides you with answers to some key questions and leaves you feeling ready to take cloud to the next level in your organisation rather than taking a step back.

Source: http://www.itpro.co.uk/cloud/19177/cloud-computing-20-where-next-business

Dell’s Future: 3 Wild Cards CIOs Should Understand

When Dell’s board of directors recently approved a $24 billion bid led by CEO and founder Michael Dell and private equity firm Silver Lake Partners to take the company private, it marked the largest leveraged buyout since the financial crisis. Another twist to the deal is Microsoft, which will provide $2 billion to support the "long-term success of the PC ecosystem."

While Dell remains a major hardware provider — 70% of its revenue still comes from PCs and related software and peripherals — the company has taken significant steps over the past few years to transform itself from a PC and server vendor into an end-to-end enterprise services provider. Since 2007, Dell has acquired 27 companies in pursuit of this strategy, including application modernization services firms Clerity Solutions and Make Technologies in 2012, managed security services firm SecureWorks in 2011, and Perot Systems in 2009, resulting in a 19.2% year-over-year growth rate in its services and software revenue over the same period.

Still, while it’s received positive reviews for its IT outsourcing (ITO) capabilities from research firms including Gartner and Forrester, Dell has struggled to be perceived as a top-tier ITO provider outside of niche areas like healthcare and government. Even in the managed desktop arena, where Dell should maintain a value proposition rivaled only by HP, it has been unable to coordinate its services and OEM businesses to meaningfully compete with the likes of IBM, TCS and Wipro.

For CIOs considering buying in to Dell’s ITO vision, the transaction (assuming it’s approved by shareholders) will have ramifications in three main areas:

– The benefits and risks of going private. On the positive side, now that every dollar invested doesn’t need to move the near-term earnings needle, Dell will be able to make longer-term investments to grow and enhance its services business without sweating market scrutiny and quarterly earnings pressures. The company can also focus management attention and resources on unifying its string of acquisitions into an integrated software, hardware, and services platform. As Dell’s CFO Brian Gladden recently commented, "Without having the scrutiny that is associated with a publicly traded stock, we can make the necessary investment and stick to plan, [and] in some cases be more aggressive than we can today."

Conversely, the leveraged buyout will put serious pressure on the company’s balance sheet. That could inhibit its ability to invest to expand and improve its ITO capabilities — with some $17 billion of new debt, Dell will be obliged to satisfy its creditors first and ITO customers second. Such uncertainty is anathema to companies outsourcing their IT infrastructures. When the foundation of the business is on the line, CIOs want stable providers that present little risk of unexpected pivots that might force them to scramble for alternatives. Yet Dell’s new set of financial pressures could result in spin-outs of business lines deemed to be "non-core" to the future, though it is difficult to tell at this point whether legacy hardware businesses or newer ITO businesses would be targeted for divestiture.

– The Microsoft investment and its potential impact on future service offerings. Yes, Microsoft’s $2 billion did not buy it board seats or voting rights. But that level of investment generally does not come without some strings attached. While Microsoft can bring innovation, improved software and cloud capabilities, and resources and knowledge to optimize Dell’s services offerings — especially those that leverage Microsoft products — Dell may find itself under pressure to index more heavily to Microsoft offerings. And that may hinder Dell’s being perceived as a technology-agnostic provider, a la Accenture, and frustrate efforts to gain traction in the ITO space.

– Michael Dell himself. He is certainly putting his proverbial money where his mouth is by contributing his entire 17% ownership interest as part of the privatization bid, along with some $700 million from one of his investment companies. Dell’s vision and energy drove his namesake company from a $1,000 startup operating out of a dorm room to one of the largest PC suppliers in the world, and having a substantial part of his personal fortune on the line makes it likely that energy will be out in full force. However, there’s also a real possibility that Dell’s actions are motivated more by considerations of his legacy or a determination to double down on the OEM business than by a burning desire to build a legitimate Tier 1 enterprise-class ITO provider.

7 Steps To Protect Your Position

Dell’s buyout could end up benefitting customers by bringing a more strategic, enterprise-level focus. But it also raises concerns — we can’t know with any certainty how a privately held Dell will fare in a competitive ITO market, and its true commitment to the ITO business will likely not be fully understood for months to come. For now, CIOs contemplating a relationship with Dell should:

– Think twice about a long-term relationship. Factor in uncertainty when defining term commitments. CIOs contemplating large-scale projects should be especially sensitive to the risk of unforeseen changes of strategic direction (on Dell’s part) and the effect on service delivery. Contract term lengths (original and renewal options alike) should be of appropriate duration to allow a timely exit should the relationship not meet expectations.

– Question its dedication to ITO. Depending on cash flow, Dell may be forced to concentrate on meeting its outstanding debt obligations rather than investing in its ITO business, the corresponding innovation, complementary acquisitions and new corporate customer relationships. Insist on multi-level formal governance provisions in all agreements, as well as upfront meetings with senior executives to gain comfort in Dell’s commitment to the ITO space in general, and to you as a customer in particular.

– Maximize flexibility while minimizing downside risk. Place a premium on low/no service commitments, low-cost termination for convenience, step-in rights, technology currency and innovation/productivity requirements. At the same time, uncertainty over future direction and the comparative opaqueness of private-company financials place a heightened importance on audit rights, disengagement assistance and divestiture support, and termination for change of control.

Source: http://www.informationweek.com/global-cio/interviews/dells-future-3-wild-cards-cios-should-un/240148382

SaaS remains most popular form of cloud computing

Software as a service (SaaS) continues to be the top cloud service that UK enterprises plan to use in 2013, despite the emergence of newer cloud services such as datacentre as a service, database as a service and even testing and development as a service.

A majority (55%) of respondents to the TechTarget and Computer Weekly UK IT priorities survey 2013 cited SaaS as the external cloud service they will use this year.

Infrastructure as a service (IaaS) was the second most popular cloud service, with around 34% of some 400 IT executives surveyed planning to use it.

In comparison, only 12% of respondents said they will use datacentre as a service and only 11% said they will use collaboration as a service.

Other cloud computing services such as testing and development and private cloud design were also cited by less than 20% of the respondents.

In addition, only 16% said they will use security as a cloud service, underlining continuing concerns about cloud computing security risks.

UK businesses are still very cautious about investing in cloud computing in 2013, with only 30% of IT executives planning to increase their cloud budget for 2013. This compared with 46% investing in software and 43% investing in hardware resources this year.

Among their IT objectives for 2013, 33% said they planned to expand IT to support business growth, 23% planned to automate the business more and 14% planned to maintain service levels with flat budgets.

But despite cloud being touted as the technology that can help businesses automate, support growth and cut management costs, a large majority (71%) of IT executives said they would use on-premise hardware or software deployment models for 2013.

The study also showed that server virtualisation and datacentre consolidation still topped the list of IT priorities for UK enterprises, with only 13% opting for a public cloud infrastructure deployment model.

Biggest cloud concerns
While security and reliability of data remained the top concerns for UK IT professionals when it came to cloud computing, other challenges such as lack of interoperability and the difficulty of migrating workloads to and from the cloud also ranked high on users’ list of cloud concerns.

As for cloud service providers, only a slightly higher percentage of respondents picked external cloud platforms (27%) than private cloud platforms (22%).

The study also revealed that, after datacentre consolidation as the top IT priority, IT professionals preferred implementing policies around BYOD trends to adopting cloud services. For example, nearly half (49%) said they are planning policies around allowing users to bring their own smartphones, and another 29% are looking at how to allow employees to use their tablet devices on the corporate network.

But only 12% said they are planning to implement policies around using email services such as Gmail, 13% are planning to allow employees to use Google Docs, and 16% are implementing strategies for employees to use Dropbox – all personal cloud services.

Source: http://www.computerweekly.com/news/2240177830/SaaS-remains-most-popular-form-of-cloud-computing-for-UK-IT

Maximizing the value of cloud-based development and testing environment

Historically, development and testing environments have been built and managed at the project level, and often remain underfunded, under-resourced and underutilized for significant periods of time. Development and testing demand and IT infrastructure management processes differ in their DNA: development and testing demand is unpredictable and highly variable, while IT managers aim for smooth, predictable operations, gradual capacity building and high utilization. Although development and testing is a crucial IT function, the inability to quickly provide the capacity these teams need delays the application development life cycle and hampers the quick, efficient delivery of applications.

As the pace of change and the level of competition grow, businesses today need an agile IT environment to match the highly dynamic and resource-intensive needs of application development and testing, a business-critical function.

According to Gartner, cloud and mobility will drive the worldwide application development market to exceed USD 10 billion in 2013. By leveraging cloud, developers, test engineers, and QA teams can develop and perform extensive scenario testing in shorter cycles. Here’s how:

Cloud provides developers and test engineers with a self-service model for requesting and almost instantly receiving resources from a pool of secured, shared and scalable infrastructure. This capability can shave days or even weeks off application development project times, speeding time to market. Cloud also enables these teams to build configuration templates and machine snapshots in seconds, run them in parallel, and customize them to meet their needs.
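As a concrete illustration of that self-service model, the sketch below uses AWS’s boto3 SDK to launch several identical test instances from a saved machine image and tag them for later tracking. The AMI ID, region, instance type and tag values are placeholders, and any IaaS provider with an equivalent API would work the same way.

```python
# Sketch: self-service provisioning of test machines from a saved template (AMI).
# All identifiers below are placeholders, not a recommendation of specific values.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # snapshot/template of the test stack (placeholder)
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=4,                       # several identical test nodes in parallel
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]

# Tag the instances so they can be metered, charged back and de-provisioned later.
ec2.create_tags(
    Resources=instance_ids,
    Tags=[{"Key": "purpose", "Value": "dev-test"},
          {"Key": "team", "Value": "qa"}],
)
print("Provisioned test instances:", instance_ids)
```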

Test engineers can quickly deploy configurations and scale capacity on demand for heavy load testing, saving time and operational costs compared with traditional on-premise development and testing environments.

Benefits of moving development and testing to the cloud include:

• Achieve faster time-to-market and greater flexibility for new products and services.
• Automate approval workflows and reduce the cost of IT infrastructure management.
• Enhance ID & Access control and safeguard data with a private or a hybrid solution.
• Utilize infrastructure capacity efficiently with granular monitoring and management of infrastructure resources.

Considerations for cloud-based development and testing

While development and testing in a cloud-based model addresses the traditional roadblocks of cost, scalability, and lack of process and methodology, it has its own challenges:

Security & Control – Businesses may have applications that need to comply with regulatory and corporate restrictions around security and data privacy, for example access control for offshore staff and sub-contractors, or a local law that mandates data residency and hence restricts the use of a public cloud for certain data or devices. Some applications may not be able to move to an off-premise cloud instance for development and testing at all, because of proprietary or legacy systems as well as intellectual property considerations. In addition, control and governance mechanisms need to be set up for integrating workflows, identity management, usage metering, chargeback and so on, to ensure efficiency and quality.

Interoperability – Businesses may be confronted with issues when developing and testing against legacy systems on the cloud, as connecting to those systems from an external cloud can pose interoperability problems. The ability to integrate with existing systems and share data between different platforms may require a multi-tier technology architecture.

Performance – Because a cloud-based development and testing environment may be shared by numerous users, businesses may sometimes have to wait for the bandwidth they need. Uptime is also an important consideration when developing and testing on the cloud to assess the performance characteristics of an application. In a private cloud, IT admins have to ensure that the underlying hardware provides adequate performance across storage, network and compute; such tuning may not be available in a public cloud.

Monitoring – Monitoring a distributed application, one that spans multiple servers, runs in a multi-cloud environment or accesses multiple applications through web services, is difficult from a performance, security and availability perspective. A full-featured logging and tracing mechanism for troubleshooting becomes imperative. Measuring cloud utilization by the various teams and business units also enables better capacity planning for the future.

Management – Servicing and managing development and testing environments has always been challenging because of bursty workloads and dynamic service requests. Current processes are designed around existing IT service delivery models: provisioning, procurement, configuration and de-provisioning of resources are manpower-intensive at the transactional level, and automation is limited by the technology. While cloud provides a wide variety of build/integration systems, test harnesses, and development and testing tools, there is still a need to bring all of this together in a turnkey, managed model to reduce the burden of managing development and testing infrastructure on the cloud.

Developing for the cloud

– Developing applications to run on the cloud is different from developing applications for a traditional or virtualized IT environment. Developers must build applications that anticipate resource unavailability and are able to recover from such incidents. For example, a multi-tier application should be loosely coupled and prepared for the failure of any other tier. The application should be built so that multiple instances of a component can run concurrently; if one instance fails, callers can easily switch to another instance.
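A minimal sketch of that “expect failure and switch instances” idea: the client below tries each of several interchangeable service instances in turn, and backs off and retries when all of them are unavailable. The endpoint URLs are hypothetical.

```python
# Sketch of a failure-tolerant client for a loosely coupled, multi-instance tier.
import time
import urllib.request

SERVICE_INSTANCES = [
    "http://app-instance-1.internal/api/health",  # hypothetical endpoints
    "http://app-instance-2.internal/api/health",
    "http://app-instance-3.internal/api/health",
]


def call_with_failover(urls, attempts=3, backoff=1.0):
    """Try each instance in turn; back off and retry if every one is down."""
    for attempt in range(attempts):
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    return resp.read()
            except OSError:
                continue  # this instance is unavailable or timed out; try the next
        time.sleep(backoff * (attempt + 1))  # all instances failed; wait, then retry
    raise RuntimeError("no service instance responded")


if __name__ == "__main__":
    print(call_with_failover(SERVICE_INSTANCES))
```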

Critical Success Factors

Thorough planning, and selection of the right technology and cloud service provider, are essential in order to maximize the value of a cloud-based development and testing environment.

To understand whether an environment is cloud-ready, some key architectural questions about the cloud-based development and testing environment must be answered:

+ What hardware/compute resources will be used, and will they be capable of meeting the development and testing objectives?
+ What resources (wiring and cabling, SAN and storage, rack space etc.) will be needed before any servers or workloads are installed?
+ What networking and data storage capacity will be required?
+ What workloads/applications will be placed in the cloud?

Businesses must make sure that the cloud-based development and testing environment is properly architected for a hybrid environment and does not lead to application performance degradation.

Once the development and testing environment in the cloud is established, businesses must take into consideration the following to ensure an agile development and testing life cycle:

+ A template library of ready-to-use VMs, defining server, capacity and storage requirements along with application components, must be created. Such templates allow team members to quickly duplicate environments and streamline provisioning.

+ Services in the cloud should be integrated with the right chargeback/metering processes and tools. This gives enterprises financial thresholds and control over costs in the development and testing cloud.
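To illustrate the chargeback point above, the sketch below queries a cloud provider (AWS via boto3, continuing the earlier provisioning example) for instances tagged as dev/test and produces a rough per-team usage and cost summary. The tag names, the assumed one day of usage and the flat hourly rate are illustrative assumptions only.

```python
# Sketch of a simple chargeback report for tagged dev/test instances.
from collections import defaultdict

import boto3

HOURLY_RATE = 0.05  # assumed flat rate per instance-hour, for illustration only

ec2 = boto3.client("ec2", region_name="eu-west-1")
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:purpose", "Values": ["dev-test"]}]
)["Reservations"]

hours_by_team = defaultdict(int)
for reservation in reservations:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        hours_by_team[tags.get("team", "untagged")] += 24  # assume one day of usage

for team, hours in sorted(hours_by_team.items()):
    print(f"{team}: {hours} instance-hours, approx ${hours * HOURLY_RATE:.2f}")
```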

A comprehensive cloud solution for development and testing provides increased control over projects, speed of deployment, ease of collaboration, and the ability to access environments on demand, enabling efficient and quality application development and testing.

Source: http://www.informationweek.in/cloud_computing/13-02-08/maximizing_the_value_of_cloud-based_development_and_testing_environment.aspx

Mobile app security: Always keep the back door locked

In the 1990s, client-server was king. The processing power of PCs and the increasing speed of networks led to more and more desktop applications, often plugging into backend middleware and corporate data sources. But those applications, and the PCs they ran on, were vulnerable to viruses and other attacks. When applications were poorly designed, they could leave sensitive data exposed.

Today, the mobile app is king. The processing power of smartphones and mobile devices based on Android, iOS, and other mobile operating systems combined with the speed of broadband cellular networks have led to more mobile applications with an old-school plan: plug into backend middleware and corporate data sources.

But these apps and the devices they run on are vulnerable… well, you get the picture. It’s déjà vu with one major difference: while most client-server applications ran within the confines of a LAN or corporate WAN, mobile apps are running outside of the confines of corporate networks and are accessing services across the public Internet. That makes mobile applications potentially huge security vulnerabilities—especially if they aren’t architected properly and configured with proper security and access controls.

Speed (to market) kills

Today we have tools like PhoneGap and Appcelerator’s Titanium platform, as well as a host of other development tools for mobile platforms that resemble in many ways the integrated development tools of the client-server era (such as Visual Basic and PowerBuilder). So individual developers and small development teams can easily crank out new mobile apps that tie to Web services, hooking them to backend systems launched on Amazon at high speed.

But unfortunately, they all too often do so without considering security up front, creating the potential for exploitation. While a lot of attention has been paid to security on the device itself, the backend connection is just as, if not more, vulnerable.

If companies are lucky, like Montreal-based SkyTech Communications, those holes merely produce public embarrassment. When a computer science student at a vocational college used a freely downloaded security scanner on SkyTech’s mobile app (which allows students to access their records and register for classes), he found major security flaws in the application. These flaws allowed anyone to gain access to students’ personal information.

Small developers aren’t the only ones who can get caught by their mobile app backends. Take, for example, General Motors’ sudden leap forward with its OnStar Web API. The company was forced to accelerate a public API effort when it discovered an enterprising Chevy Volt owner had reverse-engineered its mobile application API for retrieving vehicle statistics from OnStar’s data centers for personal use. Fortunately, he wasn’t malicious. But he did build a website for other drivers to do the same—which potentially exposed personal data in the process by using those drivers’ OnStar account logins, in violation of GM’s privacy rules. The site now runs on a new, more secure API.

Keeping the client (mostly) dumb

"This sort of thing has been a problem since computers started talking to each other," said Kevin Nickels, the president and CEO of "backend as a service" provider FatFractal. To prevent these sorts of problems—or worse—developers need to address issues like security and access control early on. "Too often, developers try to address these after the fact, and not from the very beginning," Nickels explained.

One of the key elements of security design in mobile applications is making sure that the client—the phone app itself, or the browser app—does very little processing. "The general best practice is to let the code on the device do as little as possible," said Danny Boice, the co-founder and CTO of Speek, a cloud-based conference call service that works through native mobile clients and Web browsers. (Boice is also a former executive in charge of Web and mobile development for the SAT testing company, The College Board.) "There are things on a person’s phone that you can’t control. We put most of the heavy lifting off of the client, because you can control what the application sends and receives."

It’s especially important to handle all data integration with other services on the backend and not on the mobile device, says Nickels. "Ads exposed in an app, for example, could have malicious code. We recommend people do that sort of integration via the backend. That way, things coming from outside the app won’t have any access to any system resources at all."

Dan Kuykendall, Co-CEO and chief technology officer of security testing firm NT Objectives, said the less mobile apps store and process data on the client device, the better. "A lot of developers think, ‘The only traffic that’s going to come in is from my mobile app’," Kuykendall explained. "And they build logic into the mobile client"—building queries to be sent to the backend systems and processing raw data sent back. But requests from the app can easily be "sniffed" by someone who has the application on a device of their own, by malicious software on the device that might monitor outbound traffic, or by someone maliciously monitoring what comes off mobile devices. "You don’t want the app passing SQL statements back to the backend," Kuykendall said. "That’s crazy." But as he says, that’s also all too common.
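One way to keep that logic off the client, sketched below, is to expose a narrow backend endpoint that accepts only an identifier and builds the parameterized query on the server side. The Flask route, database schema and field names are illustrative assumptions, not a description of any particular product.

```python
# Sketch: the mobile client sends only an order ID over HTTPS; the server owns
# the SQL. Authentication/authorization are omitted for brevity but required
# in practice. Schema and route are hypothetical.
import sqlite3

from flask import Flask, abort, jsonify

app = Flask(__name__)


@app.route("/api/orders/<int:order_id>")
def get_order(order_id):
    # The client never supplies SQL; the server validates the input type and
    # uses a parameterized query, so sniffed requests reveal nothing about the schema.
    conn = sqlite3.connect("orders.db")
    row = conn.execute(
        "SELECT id, status, total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    conn.close()
    if row is None:
        abort(404)
    return jsonify({"id": row[0], "status": row[1], "total": row[2]})
```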

The most basic bit of hardening required for mobile applications is to encrypt traffic to the backend—at a minimum, by using Secure Sockets Layer (SSL) encryption. But SSL by itself isn’t enough because of the way mobile devices connect. Many smartphones will automatically connect to available open Wi-Fi networks they remember, making it relatively easy to get them to connect to a rogue device that acts as an SSL proxy, decrypting and re-encrypting traffic while recording everything that passes through. While SSL is usually a defense against attacks on browser-based sessions on PCs, some mobile apps are vulnerable because they rely on WebKit to handle SSL. WebKit doesn’t fail by default with bad certificates like those used in man-in-the-middle (MITM) attacks—it sends an error message to the app that a cert is bad, and lets the code decide what to do about it. In some cases, to get around errors, apps are set to accept any cert, so they’re vulnerable to MITM attacks.

"I can sit in a public place, like the mall, with a Wi-Fi Pineapple and my laptop," Kuykendall said, "and deliver real Internet access with me as a ‘man in middle’, and see the traffic coming from people’s smartphones without them knowing their smartphone is connected to me. And when apps fetch updates, I see that." Since many mobile apps fetch updates without user interaction, "the users aren’t instigating the connection—it just happens." If data pulled from a man-in-the-middle attack doesn’t have additional sorts of controls and protection, it could then be used to attack the backend systems.
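One mitigation, not discussed in the article but commonly recommended, is certificate pinning: the app ships with the fingerprint of the server’s certificate (or public key) and refuses the connection when the presented certificate doesn’t match, which defeats a rogue SSL proxy even if the device trusts its CA. A rough Python sketch, with a placeholder host and fingerprint:

```python
# Sketch of certificate pinning on top of normal TLS validation.
# PINNED_HOST and PINNED_SHA256 are placeholders, not real values.
import hashlib
import socket
import ssl

PINNED_HOST = "api.example.com"
PINNED_SHA256 = "d4e5f6..."  # known-good certificate fingerprint shipped with the app


def open_pinned_connection(host=PINNED_HOST, port=443):
    context = ssl.create_default_context()    # keep normal CA and hostname checks
    sock = socket.create_connection((host, port), timeout=5)
    tls = context.wrap_socket(sock, server_hostname=host)
    cert = tls.getpeercert(binary_form=True)  # DER-encoded server certificate
    fingerprint = hashlib.sha256(cert).hexdigest()
    if fingerprint != PINNED_SHA256:
        tls.close()
        raise ssl.SSLError("certificate fingerprint mismatch - possible MITM proxy")
    return tls
```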

Another vulnerability caused by putting too much reliance on the client is that it requires more data to be stored on the client—data that could be exploited. Even ephemeral data (information stored locally to be processed for display or to be sent to the backend and then be disposed of) is vulnerable. "It’s not so easy to get into a running app and steal stuff," Nickels said. "It’s more of an issue with a data cache or on-phone storage, using databases like SQLite. You need to obfuscate that data as best as you can, encrypt it at rest, and store things that are not easy to associate with each other."
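A minimal sketch of that “encrypt it at rest” advice, using Python’s sqlite3 and the third-party cryptography package’s Fernet construction. In a real mobile app the key would come from the platform keystore rather than being generated next to the data, and the cache schema here is purely illustrative.

```python
# Sketch: only ciphertext ever touches the on-device cache.
import sqlite3

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # placeholder: fetch from a secure keystore instead
cipher = Fernet(key)

conn = sqlite3.connect("cache.db")
conn.execute("CREATE TABLE IF NOT EXISTS cache (k TEXT PRIMARY KEY, v BLOB)")


def cache_put(k, value):
    # Encrypt before writing, so a copied database file is useless without the key.
    conn.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)",
                 (k, cipher.encrypt(value.encode())))
    conn.commit()


def cache_get(k):
    row = conn.execute("SELECT v FROM cache WHERE k = ?", (k,)).fetchone()
    return cipher.decrypt(row[0]).decode() if row else None
```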

Source: http://arstechnica.com/security/2013/02/mobile-app-security-always-keep-the-back-door-locked/

Mobile Usability Testing

There are millions of mobile and tablet applications available in the market today. An assortment of applications ranging across productivity, social, communication, games and entertainment has influenced a whole new generation of mobile and tablet users. Mobile applications are a lifeline for smartphone and tablet users, who depend on them for voice calls, messaging, email, their virtual social lives and so on. The iPhone, for instance, has encouraged users to connect with the world via apps.

Applications that do not live up to expectations are rejected: many are discarded by users because of issues relating to accessibility, purpose, usability and appearance. A number of applications are born out of an idea that holds promise but fail to deliver. An application can fail for several reasons, the foremost being usability. Flopping apps have prompted start-up firms to change course and adopt “User Experience and Usability” as a crucial ingredient of their product development strategy. These firms make sure that a UX expert is an integral part of the project from kick-off to go-live.

Mobile UX evaluation and usability testing are still evolving. There are no definitive techniques or guidelines to follow; however, there are some best practices that can contribute greatly to identifying and resolving usability issues. A mobile user experience architect typically starts from the requirements and develops an information architecture with the platform’s interface guidelines in mind, then converts it into wireframes, which are mostly low-fidelity and generally sketched on paper. Creating paper prototypes is a best practice and helps the UX consultant identify obvious errors. The paper prototype is then converted into a digital wireframe (HTML, Flash, PDF) and loaded onto the target device to carry out usability testing. Based on user feedback, functions are rearranged and the design is passed on to UI design and coding.

The procedure is very simple. The setup is generally a mobile stand with a clamped web camera to capture on-device events and a second camera with audio to capture the user’s reactions and voice. Identify the purpose of the application and make a list of tasks that users can carry out. For example, if we are testing an email application, task-oriented testing would cover Compose Mail, Delete Mail, Select Multiple & Mark as Unread, and so on. The time taken by the user to complete the tasks and the overall experience are then recorded. The test is carried out with multiple users (a targeted audience or general users, depending on the application) and decisions are taken based on test results such as time taken to accomplish each task, readability, findability, accessibility and the learning curve.
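A rough sketch of how the timing side of such a session might be captured so results can be compared across participants; the task list follows the email example above, and the CSV output format is an arbitrary choice.

```python
# Sketch: log per-task completion times for one usability session to a CSV file.
import csv
import time

TASKS = ["Compose Mail", "Delete Mail", "Select Multiple & Mark as Unread"]


def run_session(participant_id, writer):
    for task in TASKS:
        input(f"[{participant_id}] Press Enter when the participant starts: {task}")
        start = time.time()
        input("Press Enter when the task is completed (or abandoned)...")
        writer.writerow([participant_id, task, round(time.time() - start, 1)])


with open("usability_results.csv", "a", newline="") as f:
    run_session(participant_id="P01", writer=csv.writer(f))
```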

Source: http://www.qburst.com/blog/2012/10/mobile-usability-testing/

Apple, Oracle and Java 7 Update 11: how to gracefully swallow a little bit of pride

Over the weekend, Oracle released Java 7 Update 11, a fix for a zero-day vulnerability in its Java 7 Update 10 release that, according to ZDNet and Oracle, was an “easily-exploitable vulnerability [that] allows successful unauthenticated network attacks via multiple protocols,” such that a “successful attack of this vulnerability can result in unauthorised Operating System takeover, including arbitrary code execution.”

Not a bad thing, even if security researchers are apparently still worried that the bug could linger and cause problems for up to two years.

From the Mac OS X end of this, I imagine Apple’s eating a bit of crow here (the company has always been consistent with its Java updates and, prior to the Trojan BackDoor.FlashBack debacle of last spring that exploited a Java vulnerability and affected approximately 600,000 users, had generally had a pretty good record on security). In this case, users were generally advised to head over and download Oracle’s Java 7 Update 11 patch, and where Apple was concerned, the anticipated patch was nowhere to be found via Software Update…

What this feels like it’s boiling down to is the following: Apple has generally minded its own shop where security is concerned and done a more-than-respectable job of it. Yes, there have been times when third parties like the mighty Charlie Miller have come in and shown them which security holes needed to be fixed, but Mac users could generally depend on Apple to deliver a series of security updates that took care of the details and let users go on about their business. Oracle providing the definitive fix for this Java vulnerability is a new thing altogether, and it feels a bit strange, after installing Oracle’s update, to open up my “System Preferences” menu and find a new “Java” preference pane that hooks into Oracle’s updates rather than anything Apple might deliver through its own channels.

Still, on the whole, I don’t regard this as an entirely negative thing.

As useful as Java is and has been, it represents an entirely collaborative industry effort with no single person, company or entity controlling it. As such, it doesn’t fall under the responsibilities of any single entity to update and maintain it. And while it is a bit strange to find a collective “Java” updater in my “System Preferences” (shades of Windows XP and later coming to OS X…), I can also see this as Apple backing off a little bit and perhaps admitting it’s part of the larger community responsible for maintaining and updating Java, even if it can’t fly a comprehensive new security updater under its aegis every three to four months that will apparently handle any and all issues, Java-based ones included.

This reliance on others may not be Apple’s style, but it and its attached lessons are applicable to almost any software company on Earth. Sometimes the issue at hand isn’t entirely yours to solve, especially if it wasn’t created in-house and was a collaborative effort to begin with. Sometimes the issue belongs to the industry (and its inherent community) and you’ll need to sit back and accept the communal fixes as well as offer your own ideas as to how best to resolve an issue.

Such a thing doesn’t encapsulate all the pride and independence you might hope for, but it’s part of growing up and being part of a larger whole.

Not every battle is yours to fight and if Apple can learn this, so can your company.

Source: http://www.sdtimes.com/blog/post/2013/01/15/Apple-Oracle-and-Java-7-Update-11-how-to-gracefully-swallow-a-little-bit-of-pride.aspx

Whatever happened to the art of software testing?

Over the last year I’ve had the opportunity to attend a number of extremely interesting and mind-expanding conferences focusing on emerging and somewhat disruptive technologies and companies: APIs, mobile, cloud, big data – the works. Coming from a quality background, it has struck me how little focus these companies give to testing. They talk plenty about continuous integration, agile methodologies, user groups and continuous deployment – but testing? Nope.

Why is that?

First, let me elaborate a little on what I mean (and don’t mean) by "testing." I don’t mean unit tests. I don’t mean BDD or TDD based on some cool-named framework using a custom DSL to describe scenarios. I don’t mean recorded and automated scripts of your website or application. Much of this is being done by many of these companies – which is great, and I’m positive it increases the quality of their products.

What I do mean by “testing” is testers who try to break stuff; who do things with your software that it was never intended to do; who provoke the hardware hosting your application into behaving like it usually doesn’t; and who continuously challenge the design and architecture of your products by thinking outside the box and applying this in a methodical and structured way. When it comes to quality, these testers will be your greatest pain and your biggest gain. They take the quality of your products to the next level by doing all that crazy stuff your users do (and don’t do), giving your team the opportunity to fix the resulting problems first.

So, back to the question: why is it that these oh-so-crucial testers and the art of testing are so absent from these companies and conferences?

I have three possible explanations in my mind:

Strike One: Developers aren’t testers

Developers – I love you – but you aren’t testers. A tester’s mentality and talent when it comes to quality is to find defects and ultimately break stuff. Developers, on the other hand, want to make sure things work. This might sound like a small difference but the implications are huge. The “developer” of a steering wheel will make sure it turns the car left or right when you turn the wheel left or right (at the required rate/degree/whatever). The tester, on the other hand, will jerk the wheel back and forth to bring it out of balance, will submit the wheel to extreme (but plausible) conditions at the north pole and in the Sahara desert, and he might even compare the wheel to the competitors and tell you why theirs is better. Developers confirm, testers overhaul. That’s just how it is. Unfortunately, though, developers are usually at center stage in a development team, and often lack the insight into both the craft of testing and the time it takes to do it right. You need both on your team for your quality to be top-notch – neither can stand in for the other.

Strike Two: Agile Testing = Automated Testing

Don’t get me wrong – agile development can be fantastic when executed correctly, and has surely improved the lives of many a developer/tester/product-owner out there. Unfortunately, though, agile teams (at least in their infancy) often put testing efforts into the hands of developers (see point 1), who often believe either that you need to be able to automate all your tests or that a BDD/TDD specification is a valid substitute for testing. Neither is correct. Using a BDD/TDD specification as a test is just another way of checking that your software performs as required/designed/specified. And, as argued above, exploratory testing is key to finding those out-of-bounds conditions and usage scenarios that need to be fixed before users encounter them.
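To make the distinction concrete, the sketch below pairs a “checking” test that confirms exactly what a BDD scenario specifies with an exploratory-style boundary test that feeds the same function input nobody specified. The function parse_discount_code and its behaviour are invented for illustration.

```python
# Checking vs. breaking: the first test confirms the specified behaviour, the
# second probes inputs nobody specified. `shop.parse_discount_code` is hypothetical.
import pytest

from shop import parse_discount_code  # hypothetical module under test


def test_specified_behaviour():
    # The "checking" test: exactly what the BDD scenario says should happen.
    assert parse_discount_code("SUMMER10").percent == 10


@pytest.mark.parametrize("weird_input", [
    "", " " * 10_000, "SUMMER10; DROP TABLE codes", "💥", None, "summer10\n\x00",
])
def test_out_of_bounds_inputs(weird_input):
    # The "breaking" test: hostile or nonsensical input must fail safely,
    # never crash the process or be silently accepted.
    with pytest.raises(ValueError):
        parse_discount_code(weird_input)
```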

Strike Three: Cheap-skating quality

OK – you’ve convinced your agile team they need to do exploratory testing during their sprints, and your developers have reluctantly agreed that they aren’t testers at heart. So what happens when you approach the management team with a request to hire an expert tester?! Hands in the air if you think they might answer something like:

+ "We have a deadline – we need to release – we’ll invest in development now and testers later."
+ "Don’t our developers have 90% code coverage? Do we really need testers?"
+ "Our users will help us iron out those out-of-bounds issues and quirks. That will be ample feedback for future improvements."
+ <any other “explanation” that is based on the reluctance to spend money on quality>

No one raised their hands? Phew, that’s a relief! Otherwise, given the arguments already stated, this is an obvious and probably the most common mistake. Your storytelling talents will be put to the test. Hopefully you can convince management to make the investment.

What to learn from this mini-rant? To put it simply:

+ Understand that testing, just like development, is a craft of its own
+ Cherish your testers and their expertise
+ Invest in quality – your users will love you for it.

Source: http://www.networkworld.com/community/node/82273