Sunday, July 12, 2015

Adopting Agile Practices at the Medical Division of ON Semiconductor

By Fei Min Lorente

We started working in an Agile way last year. We began with classroom training and workshops, then retained the instructor on an as-needed basis for coaching. We also implemented a web-based tool called Jira, not only for tracking issues, but also to record our planned features (defined as epics and stories), set tasks for sprints, estimate points, and track our velocity. In all of this, I (the lone technical communicator) have been treated like one of the developers. I’m part of the sprint planning meetings, retrospectives, and daily stand-ups.

We are still trying out new processes and procedures when it comes to developing a product, and although we’ve called our way of working “Agile”, our manager is committed to doing what works, whether it’s Agile or not. Agile evangelists will tell you that Agile isn’t really a methodology with a rigid set of procedures; it’s really a set of values and principles that guide you to make the best decisions about how you work. As a result, no two companies working “Agile” are going to do exactly the same thing. In fact, right now, no two groups are doing the same thing either, so I will describe what two of the teams are doing.

Team A is working on a new product, so the documentation is being developed from scratch. We created an outline at the beginning of the project because we had a good idea of what all the features would eventually be, and we still had to estimate resources and schedule, so I needed an idea of the page count. From this outline, I created a skeleton document in FrameMaker with all the subheadings that I knew of at the time. As the developers add new features, they document them directly in the manual. In fact, they cannot consider their task closed until it is also documented, or a new task is opened to do the documentation (but most of the time it’s quicker to write the documentation than to create a new issue in Jira and then write it, so they just write it). I then become one of the reviewers for the issue, except I’m reviewing the documentation instead of the code. I do the usual checks for grammar, clarity, consistency, conciseness, and accuracy (I can also try out the feature), sometimes on a change as small as a sentence.

We have still allotted time for a complete edit and technical review of the manual when all the features are done (or mostly done), but we expect that the editing and technical review should take considerably less time than if we weren’t doing that on a continuous basis.

Yes, there has been and will be rework because the features change. Sometimes they are reworked because of feedback from our users or the product manager. But the documentation that is released with the product at the end of every sprint matches that release, or at most is one sprint behind.

Team B was working on an update to an existing product. Our tasks were defined by bugs and new features that were entered in Jira. If a new feature required documenting, we had to add a task for it, define the work and estimate it. There were dependencies: we had to know that a feature would be included before I could document it, but it didn’t have to be finished before I could start. Several times, the developer ran into problems, delaying the completion of a feature, but we agreed as a team that I could go ahead and document the way it was going to work. Yes, sometimes I had to adjust the documentation again, but it kept us moving forward and avoided the “do everything in the last two weeks” scenario that usually befalls the technical communicator.

After working Agile for about seven months, I have to say that I prefer it to the old Waterfall method. Mostly I appreciate the constant flow of communication: the daily stand-ups, sprint planning, and retrospectives. It’s forced us to really think about how we work and how we can do better. It keeps me up to date on all the development work; no one has to remember to tell me about a new feature or change that needs documenting because that communication is part of the process. It also gives me an opportunity to make suggestions about how the product works every time they’re working on a new feature.

Agile isn’t necessarily less work, with all the meetings and rework, but it prevents the massive crunch at the end, and it helps to control management’s expectations because they can get a progress report every two weeks. It also helps us to produce a better product because of the constant feedback from customers or those who represent the customers, including me. We haven’t quite settled on the best way to do everything yet, but in true Agile style, we’re willing to keep trying and changing.


An Epic Experience with Agile

By Debbie Kerr

While I am a technical communicator, my role on projects is as a business analyst and not as a technical writer. My focus is on working with subject matter experts (SMEs) to identify the requirements that will be used as the basis for developing software. This is part of the Agile Manifesto: to put customer collaboration over contract negotiation.

For my current project, there are epics, user stories, and detailed requirements. These requirements are part of the process of converting a paper application to a web-based one. The paper application is 30 pages long and has 23 sections, a large number of fields, associated business rules, and an extensive series of validations both within screens and between multiple screens. In general, the epics are the 23 sections that have to be completed in the application. There are approximately 15 user stories per epic, and the detailed requirements are all the fields, functions, business rules, and validations. These requirements range from about 10 to 20 pages per section.

Normally, in a Waterfall project, these detailed requirements would be completed from start to finish before development begins. With this project, I write the requirements in iterations, although it can take me several cycles to complete the requirements and receive signoff. There are formal signoffs; however, once development starts, sometimes it becomes necessary to add or modify requirements. I usually capture these requirements in emails with a less formal approval process.

In addition to formal requirements, there are also functional specifications that a Business System Analyst prepares after the detailed requirements are signed off. In some cases, the requirements that I write contain enough information that separate functional specifications are not required. This practice is in keeping with the Agile Manifesto: to place working software over comprehensive documentation.

An Agile approach means being able to adapt quickly to change. Recently, the day before a demo was scheduled to occur, we discovered that there was a misunderstanding about how something was supposed to work. Since the development was being completed in three-week cycles and the demos for the previous cycles were successful, the required changes could be identified and implemented much more quickly than in a Waterfall project, where the problem might not have been noticed until much later in development, or even as late as the testing stage. Since retrospectives (a type of lessons learned) are completed at the end of each cycle, we were able to identify ways to improve our processes going forward so that this problem does not occur again.

As cycles are completed, a Quality Assurance (QA) Specialist tests the software that has been developed in the previous cycle. Sometimes this testing is automated so that it can be repeated multiple times over the course of several cycles.

In addition to QA, there is also User Acceptance Testing (UAT). This testing ensures that key people can use the software to complete the tasks that they normally perform in their roles. Some of this UAT is being done as part of various cycles and the rest of it will be done when all development is complete. This is considered end-to-end testing and is often associated with a Waterfall Software Development Life Cycle (SDLC). For my project, this type of testing is required because, until all 23 sections have been developed, the completion and submission of the entire application cannot be tested.

Like any project, it will be epic when everything comes together. It will be even better when it is implemented and people begin to use it.







Saturday, June 27, 2015

A primer on Agile software development

Until the introduction of Agile, the standard software development life cycle (SDLC) was a Waterfall approach, where each stage of the SDLC had to be completed before the next one could begin. Requirements had to be gathered in great detail. Design would only begin once the requirements were completed. Development began once the design was complete, and testing didn’t start until development was done. This method left those people with a vested interest in the project (stakeholders) wondering what was taking so long. There was nothing tangible to show that any progress had been made.

The basis of working Agile is the Agile manifesto from the founders of Agile: We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Let’s just establish right now that “Comprehensive documentation” is referring to planning and tracking documents, not user assistance. The point is that Agile values a product that people can use over writing about what that product is going to be or should be.

The introduction of Agile was a new way of thinking. Instead of detailed requirements, there were user stories. Each story identifies the person associated with the requirement, what the requirement is, and what this functionality will help the person to achieve. For example, purchasing agents need the ability to enter order information online so that it can be sent directly to outside vendors. This may be considered a very large user story, which is referred to as an epic. This epic can be broken into smaller user stories, which will make it easier to provide estimates and will enable work for smaller user stories to be associated with different iterations.

Some examples of smaller user stories that could be associated with this epic would be:
  • Purchasing agents must have the ability to enter a vendor’s name, address and telephone number and associate it with a vendor number so that the purchasing agent can use the vendor number when placing future orders.
  • Purchasing agents must have the ability to select from a list of standard products when placing an order so that consistent terminology is used and inventory can be tracked more effectively.
Using these user stories, the Agile team assesses and estimates the complexity of the software design and development associated with them. These estimates are in the form of points. The larger the number of points associated with a user story, the more complicated and time-consuming the design and development is believed to be.

Unlike a Waterfall SDLC, which is a linear approach, Agile is iterative. Before each iteration, which is called a sprint, the Agile team agrees on the tasks they are going to complete during the sprint. The total of the points estimated for the completed tasks is referred to as the velocity. The planning for the next sprint should change based on what happened in previous sprints. For example, if the velocity for the previous sprint was 50 and the team had estimated they could complete 75, the team may select fewer user stories and aim for 50 the next time. After a few sprints, the team and management can get a good picture of the velocity and the predicted end date of the project, given the current scope and resources. That’s one of the benefits of Agile: the information learned during one iteration can be applied to the next iteration, instead of lessons learned being identified at the end of a project, where they cannot be used until the next project.
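The velocity arithmetic described above can be sketched in a few lines of Python. This is only an illustrative sketch, not part of any Agile tool (Jira and similar tools track this for you, and the function names here are invented for the example):

```python
import math

def velocity(completed_points):
    """Velocity for one sprint: the sum of story points
    for the tasks the team actually completed."""
    return sum(completed_points)

def sprints_remaining(backlog_points, past_velocities):
    """Rough forecast: remaining backlog divided by the team's
    average velocity, rounded up to whole sprints."""
    average = sum(past_velocities) / len(past_velocities)
    return math.ceil(backlog_points / average)

# A sprint that completed stories estimated at 8, 5, 3, and 13 points
# has a velocity of 29.
print(velocity([8, 5, 3, 13]))

# With past velocities of 50, 55, and 45 (average 50), a 150-point
# backlog forecasts 3 more sprints.
print(sprints_remaining(150, [50, 55, 45]))
```

The point of the sketch is the feedback loop: if a team commits to 75 points but completes only 50, the next forecast simply uses the observed 50, which is how the predicted end date becomes more trustworthy with every sprint.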

A lot more can be said about Agile and Agile tools, but we’re hoping to just establish the background for you to understand the following series of stories. Debbie Kerr, Ursula McCloy, and Fei Min Lorente are three people who work in Agile environments. This introduction will be followed by articles in which each of them describes how Agile has been implemented at her workplace. Although Agile provides a new paradigm for software development, it favours people over processes, making an Agile experience in one company vastly different from the next.




Sunday, June 14, 2015

SPFE: Synthesis, Presentation, Formatting, and Encoding

By Fei Min Lorente

Mark Baker was generous enough to deliver another seminar to the Southwestern Ontario Chapter. He explained why and how to use SPFE (pronounced “spiffy”), which is a fascinating architecture for anyone—professional writers and casual contributors alike—to create individual pages and let the architecture take care of incorporating them into publications. It removes the barrier of a specialized markup language or specialized interface that someone has to learn before putting information directly into a structured and self-organizing format. It’s designed for documentation that will be published on the web.

Why you would want to do this is explained in the other seminars that Mark has presented to us: Every Page is Page One and Information Architecture—Bottom Up*! To summarize, people usually find information by searching first, which means that they won’t necessarily start reading on any particular page (hence, every page is page one). This means that the top-down organization of information is mostly irrelevant. In other words, if you try to organize information in the order you think people are going to read, you’re setting yourself up for failure. Wikipedia is Mark’s favourite example of how information should be organized for maximum usability. Each page is a topic, and all the information on a page is at the same “level”—it doesn’t mix generalities with details. For details, you can click on one of the many links that will explain what you need to know.

For more reasons to consider SPFE, imagine training your SMEs to do structured authoring without them having to learn about the publishing system or content management. To them, it will look like they are filling out a form, with field names they can understand so that they know what to write. They will have to think about the rhetorical structure and the annotation of their subjects, but they already know this information, and are simply learning to express it formally. SPFE guides their input while saving you the time of writing everything from scratch. It enables distributed authoring, which means a shorter time to publishing the most up-to-date information.

After all, customers expect accurate and up-to-date information whenever they search for it. This means that the publishing cycle has to be instantaneous. It’s one thing to collect up-to-date information and publish it at the press of a button; it’s another to create and maintain all the links, especially if you’re publishing a subset of the documentation. SPFE uses the semantics of the information to create links. For example, if the SME identifies something as a feature, SPFE will go looking for another page about that feature and automatically link to it. If there isn’t an existing page about that feature, SPFE will point out that you have a gap in your documentation. You can choose whether to add that information or ignore SPFE’s advice at that point.
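As a rough illustration of that linking behaviour, here is a hypothetical Python sketch. This is not SPFE’s actual implementation; the page names, data structure, and function are invented to show the idea of resolving links from semantic markup and reporting gaps:

```python
# Pages are indexed by what they are about: a (kind, name) pair.
pages = {
    ("feature", "auto-linking"): "auto-linking.html",
    ("feature", "validation"): "validation.html",
}

def resolve_link(kind, name):
    """Return the page that documents this subject, or None
    if the documentation has a gap."""
    return pages.get((kind, name))

# Subjects marked up by the SME, e.g. tagging "search" as a feature.
references = [("feature", "validation"), ("feature", "search")]

for kind, name in references:
    target = resolve_link(kind, name)
    if target:
        print(f"link '{name}' -> {target}")
    else:
        print(f"gap: no page documents the {kind} '{name}'")
```

The key point is that authors only state what something is; the build decides where the links go, and a missing target surfaces as a documentation gap rather than a broken link.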

To help guide your SMEs with their information input, you need to customize the “form” to reflect their area of technical expertise. You might do this now with a customized DTD or schema, or by using a subset of DITA, but it would be an awkward way to do it. If you have ever implemented such a structure, you know that this isn’t an activity that you’d want to do often. SPFE, on the other hand, makes this structural customization easy so that the SMEs don’t have to learn a complex and foreign markup language, while you can harness the power of structured content with metadata. You can reuse SPFE’s existing structures, which are composed of elements, leave out the ones you don’t need, change the ones you want to customize and add new ones. You can also reuse and customize the scripts that validate and automate the linked output.

Yes, there are scripts involved. Working with SPFE is not for the faint of technical heart, but if you know a good programmer, this is the most flexible documentation system available (i.e., it’s not tied to any particular tool). Mark is using oXygen as his editor, Python to manage the build process, XSLT to process the markup, and CSS to format the output for the web; however, since everything is text-based, you can choose any tool or scripting language, as long as you follow SPFE’s general architectural design and design principles. SPFE is open-source, so you are free to use it as you need.

The concepts behind SPFE are well-reasoned, but considering it took Mark three presentations to lead us to this open architecture, it’s hard to condense the information to one blog article. If I’ve piqued your interest, I recommend taking a look at the SPFE pages on the web, starting with http://spfe.info/. Oh, sorry, I’m supposed to let you choose your own first page.

*For a more complete article about Mark’s Information Architecture—Bottom Up, see Sarah Maddox’s write up.


Sunday, April 19, 2015

Making the move into Instructional Design

Are you a technical writer? Do you find yourself looking around wondering how much longer there will be a demand for your skills in the marketplace? Have you seen your department shrink or heard of others disappear entirely, perhaps due to layoffs or outsourcing? Maybe you've started thinking, "I need to expand my skill set so I don't feel as exposed to one industry."

Look no further than instructional design, the development of instructional and training materials and activities. While tech writing still offers some opportunities, the future of instructional design is very good, Stephen Van Esch argued in his presentation "Moving Into Instructional Design", delivered to our local STC chapter at one of our recent education evenings.

What you need to learn

So what do you need to know for instructional design? It is mastery of design concepts and principles that enables an instructional designer to create positive learning outcomes, says Stephen. He sees two key areas where you should develop your knowledge: instructional design frameworks and learning objectives.

1. Instructional design frameworks

The first step is to familiarize yourself with frameworks such as ADDIE. ADDIE (Analyze, Design, Develop, Implement, Evaluate) is an industry-accepted yet customizable instructional design framework. Knowledge of this process framework will help you plan and stay on track when developing training projects.

ADDIE makes sure you've completed all the necessary steps and haven't forgotten anything (like evaluation!). Yet, since it's just a framework, ADDIE also provides flexibility and customization of steps, depending on the needs of the project. Finally, ADDIE provides the structure needed for collaboration and review with stakeholders throughout the process, with tools such as storyboards, which prevents rework.


2. Learning objectives

Another essential skill is learning how to craft effective learning objectives. Learning objectives are powerful tools and the basis for successful training content. A model like Bloom's Taxonomy, which identifies six levels of learning (Remember, Understand, Apply, Analyze, Evaluate, and Create), helps instructional designers classify learning objectives and best organize content to facilitate learning.

Well-written objectives use targeted verbs for measurable learning objectives, leading to effective training that addresses the actual problem. Additionally, learning objectives help keep you on task, informing your development of training content. Referring back to them ensures that all that's needed (and only what's needed) gets included.

You can tailor learning objectives according to your training audience's needs and context of learning. For example, when developing training for the corporate world, Van Esch recommends focusing only on the three levels of Remember, Apply, and Analyze, since the corporate context tends to have more compressed learning time and more specific demands than traditional education environments.

What you don’t need to learn

You might be asking yourself: “OK great. I know what I need to learn. What else do I need to know before I jump in?” Actually, according to Stephen, it’s what you don’t need to learn that may be to your greatest advantage. As a technical communicator, you already have a lot more to bring to the table than you might realize — a baseline of skills that you can leverage when entering instructional design:
  • Communication: You know how to communicate concepts in concise and clear terms. This is your biggest advantage.
  • Working with SMEs: You know how to work and collaborate with technical experts and engineers.
  • Organizing information: You know how to organize information into logical blocks, determine a logical flow, and help ensure information findability.
  • Audience analysis: You understand how to identify and analyze your users and maintain a user-centered perspective.
  • Process: Documentation tends to be process-driven (planning, review), so you already know how to bring logic to the process of creating something.

So, as a technical writer you're already ahead! You just need to take the plunge. Once you're familiar with the frameworks and principles of instructional design (such as ADDIE, Bloom's taxonomy, and learning objectives), you can expand your existing writing skills to encompass this new field. You'll be equipped to produce training that truly hits the target, helping your learners improve their performance and get the job done.

By Bea H.


Sunday, December 14, 2014

Information Architecture -- Bottom Up!

By Greg Campbell
Mark Baker's presentation in March, Every Page is Page One, introduced the concept of bottom-up information architecture and provided useful design strategies for bringing tech comm to the web. But one presentation was not enough, so we had him back again on November 26! Baker provided a more detailed analysis of the issues that plague top-down hierarchical structures, and of how the user experience of search and hyperlinking should shape the organization of web-based information.

It is no secret that we use the search function and links to find information on the Internet. The search function is the most popular channel we use to reach the content we desire, and it is exactly what Baker's topic-based approach builds upon. Understanding that people use the search function is his logical first step in shaping how technical communication is experienced on the web.

To show that current methods could be better, Baker used old and new encyclopaedias to illustrate the value of topic-based organization.
The title of the image to the right translates as “figurative system of human knowledge”, and the diagram is commonly known as the tree of Diderot and d’Alembert. It is supposed to represent the structure of knowledge itself. Finding information here means following a sequential path through a family tree of information.

The tree of Diderot and d’Alembert is an example of hierarchical structures and illustrates the top-down approach to information architecture. The common denominator of the top-down approach is the linear sequencing of information.

When a top-down organization is ported to the web, Baker says to reorganize it with a bottom-up structure. If one does not reorganize the hierarchical structure, users will need to read prerequisite information to understand the information that brought them to the page in the first place. There cannot be context if your document has 210 pages and Google drops the user in at page 78; the user will need to read the previous pages to situate the information on page 78. One of the takeaways of Baker’s presentation is that top-down hierarchical structures on the web are not compatible with the way we use the web.

To bring technical information to the web, Baker advocates reorganizing the information into stand-alone topics, using a typical Wikipedia page as his example. When users look for technical information there, they find it embedded in the proper context without having to read previous pages. With bottom-up architecture, the context is always there because every page is the first page: the user always lands on a page that provides context, with links to get to the information they want.

The comparison between encyclopaedias organized by hierarchies and encyclopaedias organized like Wikipedia shows the value of bottom-up architecture for the web. The information does not stop here; there is more to come on this topic. Baker has another presentation in the New Year that builds on bottom-up information architecture.
Stay tuned.

Wednesday, April 9, 2014

Online File Sharing: A Technical Writer’s Perspective on Hash Checking and Encryption

By Anuradha Satish

As technical communicators, many of us use the Internet to share our work files. SharePoint, Dropbox and Google Docs are among the most common platforms we use to share work.

Recently I came upon a blog article* about Dropbox disrupting the sharing of a document. Dropbox alleged copyright infringement based on the DMCA without even looking into the contents of the file. The article described how Dropbox evaluates the legality or legitimacy of files based on hash algorithms: when a document’s hash code matches one of Dropbox’s blacklisted documents, Dropbox can prevent the file from being shared without having to know what it actually contains.

The good news here is that Dropbox is not snooping into our shared files! However, an incident like this makes me wonder how accurate the hash checker is and how safe our documents are when shared online. Let’s take a closer look at what a hash code is and how it operates.

A hash code can be interpreted as a “fingerprint”: it is a unique alphanumeric code assigned to every document stored in any cloud-shared folder. Here’s a simplified example to show how it works:

  • Document A containing 1,2,3,4 could have a hash code assigned as ar59i3nd
  • Document B containing 2,1,4,3 could have a hash code assigned as b3nj98he

Each document’s hash code serves as a unique identifier. Storage centers, such as Dropbox, use this hash code to identify the correct document. If the document is altered in any way, the hash code changes, so two versions of the same document will always have two different hash codes. But if a document matches another document word for word, it will have the same hash code.
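The same property can be demonstrated with Python’s standard hashlib module. This is a minimal sketch; SHA-256 here stands in for whatever algorithm a given storage provider actually uses, and the short codes in the example above are simplified illustrations:

```python
import hashlib

def fingerprint(text):
    """Return a hex 'fingerprint' of a document's contents."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

a = fingerprint("1,2,3,4")
b = fingerprint("2,1,4,3")

print(a == fingerprint("1,2,3,4"))  # identical content, identical hash
print(a == b)                       # reordered content, different hash
```

Note that even a one-character change produces a completely different fingerprint, which is exactly why a hash checker can recognize an exact copy but not a lightly edited one.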



Storage centers use hash checkers to validate data. In the case of the article that prompted this blog entry, hash checkers allowed Dropbox to prevent the sharing of a file because its hash code matched that of a blacklisted document.

What are the direct implications of this to technical communication?

First, the technical communication industry itself is moving towards online and remote work. Many technical communicators, including writers, illustrators, trainers, and editors, have worked or will work for remote clients and share files online. This makes our work prone to cyber duplication and plagiarism. It is important to be aware of this potential risk when sharing files.

Second, file encryption is becoming extremely important as more data is transmitted online. Cloud storage providers can supply a basic level of encryption to ensure data security, but by offering this service, the provider also gains the ability to access that content at any time. Instead of leaving it up to the online storage providers, we can take control of the encryption process and encrypt documents before sharing them online. Many free encryption tools offer this service.

Third, be aware and alert once your document is shared online. If you think you have shared potentially confidential information, run a search on popular search engines. If you ever come across web pages that have copied content from your work, or that contain content similar enough to make you suspect plagiarism, you can file a DMCA** complaint with the search engine. The law requires the search engine to stop displaying the copyrighted content. It is a cumbersome manual check, but it will ensure that you catch the slightly different versions of your document, which is something a hash checker will miss.

** Be aware of our laws:

  • The Digital Millennium Copyright Act (DMCA) is a United States law against copyright infringement that implements two 1996 treaties of the World Intellectual Property Organization (WIPO). It criminalizes production and dissemination of technology, devices, or services intended to circumvent measures (commonly known as digital rights management or DRM) that control access to copyrighted works. It also criminalizes the act of circumventing an access control, whether or not there is actual infringement of copyright itself. In addition, the DMCA heightens the penalties for copyright infringement on the Internet.
  • In Canada, currently it is legal to download any copyrighted file as long as it is for noncommercial use, but it is illegal to distribute the copyrighted files (e.g. by uploading them to a P2P network). Canadian law makers are proposing Bill C-61, an Act to amend the Copyright Act – a controversial Bill that is similar to the American DMCA.
* Original Blog Article
**Source: http://en.wikipedia.org