Friday, December 21, 2018

National Programme On Technology Enhanced Learning

https://www.youtube.com/user/nptelhrd

Bhandarkar Oriental Research Institute - Digital Library

http://borilib.com/repository/search/searchHome

2. Maha Anubhav Diwali Issue 2018

This is the second of this year's Diwali issues that I picked up to read.

I was delighted to see a story by Ratnakar Matkari right at the start. But the joy didn't last long. The story is the kind that feels realistic in today's times, so the plot wasn't especially novel, but I had expected some twist at the end; there wasn't one, which was disappointing. :-(

The next piece is a reportage on the zero-cost system in Kolkata that treats the city's sewage naturally and works the miracle of raising fish and growing rice on it. Anyone who loves nature and the environment should read it. How wonderful it would be if more of our cities followed this example. Of course, reading about the builders' lobby's wretched attempts to destroy even the existing system, the chances of that happening look dim. God knows when our eyes will open!

'Mi Gungunsen', Anil Awchat's piece, appealed to me because it describes how classical singing can be understood without formal training in it. For years I have been making plans to do something about this, or at least acquire a nodding acquaintance with the subject, but no progress so far. This piece has made me want to try again; who knows, I may even get somewhere. The story 'Eka Phalacha Prasad' was enjoyable because it deals with an unfamiliar field. 'In Search of Vivian Maier', Nitin Dadrawala's piece, introduces a woman who took a great many photographs simply as a pastime; they came before the world by sheer coincidence only two years before her death. Those photos offer a glimpse not just of America in 1950-70 but of many other countries of the world too. Wonders never cease! I absolutely loved Gauri Kanetkar's piece on 'supercaves'. If I can, I'll dig up more on the subject on the net.

'Aveli Jevha Datala Andhar', the account of the battle a wife fought through her husband's illness after both his kidneys failed, made me want to literally prostrate myself before the author, Vrushali Joglekar. Our medical and legal systems, which make relatives run from pillar to post instead of giving patients comfort, are truly a marvel. And after all that, malpractice still happens and the perpetrators rest easy. Hang the monk and let the thief go! Still, this piece gave me a fresh understanding of the saying 'sar salamat to pagdi pachas' (keep your head, and there will be fifty turbans). 'Soyarik', a chapter from Rangnath Pathare's forthcoming novel 'Saatpatil', looked intriguing. I must check whether the novel turns up in the library. 'Tadobache Sagesoyare', Dinanath Manohar's piece on the animals at the late Baba Amte's Somnath project, is also lovely. It reminded me once again, forcefully, that I must visit Anandvan. Like any ordinary person I have an enormous fascination for trains, so Ganesh Kulkarni's 'Ye Duniya Toofan Mail' was great fun; it offers plenty of entertaining information about how railways, stations, and the like have been depicted in Hindi cinema. But the bit at the end about Keki Moos and a dog he had leaves a lingering ache. Why the dog's owner never came looking for it, who knows. :-(

'Dandakaranyat Rujtay Lokanche Rajya', Deepti Raut's piece, was deeply satisfying to read. It gives inspiring information on how development work has begun, with people's participation, in Bhamragad and the surrounding areas where Naxalites hold sway. Why don't our newspapers print pieces like this instead of negative news?

I read a little of Suhas Palshikar's piece on today's smoldering society, angry over all the wrong reasons. But it grew ever more serious as it went on, so I gave it up. I used to doggedly finish even articles that didn't interest me, as though abandoning one midway were a sin. These days, having vowed to put every moment I get to good use (and the vow has held so far!), I don't read what I don't enjoy. Since I hadn't much cared for Rajeshwari Deshpande's piece in the Loksatta Diwali issue, I skipped her piece in this issue too after the first few paragraphs. For the same reason I didn't read 'Kaydyache Rajya: Apeksha ani Adchani' and 'Shyam Manohar: Jagnyat Maja Yet Nahiy'. 'Swikarleli Saktamajuri' felt like the next installment of some series from Maha Anubhav's monthly issues, because as a standalone piece it doesn't hang together.

So on the whole, a bit of a letdown compared to last year, it's true. But still an issue worth the money.
 

Healthy eating, meal kit sites

https://ichef.in/

https://www.flaxitup.com/home

https://burgundybox.in/products/served

https://thediet.in/

https://hellogreen.in/




Monday, December 10, 2018

Michael Bloomberg. Bloomberg by Bloomberg.

Eric Evans. Domain-Driven Design: Tackling Complexity in the Heart of Software.

Michael Feathers. Working Effectively with Legacy Code.

Lisa Crispin and Janet Gregory. Agile Testing: A Practical Guide for Testers and Agile Teams

Dan Heath and Chip Heath. Made to Stick: Why Some Ideas Survive and Others Die

Steve McConnell. Software Estimation: Demystifying the Black Art

David Schmaltz. The Blind Men and the Elephant

Data Crunching: Solve Everyday Problems using Java, Python, and More

From Java To Ruby: Things Every Manager Should Know

Interface Oriented Design

Manage It! Your Guide to Modern Pragmatic Project Management


Terrence Ryan. Driving Technical Change: Why People On Your Team Don’t Act On Good Ideas, and How to Convince Them They Should

Stephen R. Covey. First Things First.

Gerard Meszaros. xUnit Test Patterns: Refactoring Test Code

The Survivor's Club - Ben Sherwood

Performing Under Pressure by Hendrie Weisinger & J.P. Pawliw-Fry

(Source Loksatta)






1. Loksatta Diwali Issue 2018

Diwali was nearly two months past, and I still hadn't found an auspicious moment to read even one of the issues I'd bought. It finally came last week, and I decided to finish the Loksatta issue first.

As per my usual custom, first the list of pieces I liked. The first mention must go to 'Kashmir: Aarparchi Ladhai', Mahesh Sarlashkar's reportage. These days one hears all sorts of things about Kashmir, the sentence 'Kashmir is an integral part of India' being a constant refrain, and our collective knowledge of the history can be summed up in a single word: 'fathomless'. This piece cuts through that fog of ignorance and misconception like a laser beam. Anyone who wants to understand how the very belief that Kashmir 'merged' into India is mistaken to begin with, what exactly the issue of the Pandits' exodus is, and how many different facets this question has, absolutely must read it.

'Medicine Merchants', Mrudula Bele's piece, sheds light on multinational pharma companies, their research, and drug prices. Someone had forwarded it to me on WhatsApp, but it is so enlightening that I read the whole thing again. 'Jagatikikaran ani Ujavi Laat', Vishakha Patil's piece, gives good information about the world's countries that are marching toward dictatorship. Of course, Modi-bhakts had best not get into the business of reading it.

If, like the average Marathi person, you are theatre-mad, then reading 'Tisri Ghanta Ghanghanatey' by Kamlakar Nadkarni, 'Prayogik Rangbhumichi Valane' by Madhav Vaze, and 'Digdarshakachi Rangbhumi' by Vijay Kenkre is a must. It is astonishing to read what powerhouse actors, actresses, and plays there once were, and it stings sharply that we never got to see them.

'Manatlya Vaphyachi Kahani' by Vijay Padalkar gives a fine introduction to the director Ingmar Bergman, as does Prashant Kulkarni's piece to the American cartoonist Gary Larson and Veena Gavankar's to Israel's Golda Meir. I must read Gavankar's forthcoming book on her. Sachin Kundalkar's piece on the changes in food cultures around the world puts forward some different ideas. I liked 'Kahi Pane', Asaram Lomte's story, though it felt as if it ended midway. Having an interest in data science, I ought to have liked Samhita Joshi's 'Vida: Aajcha Sona', but I simply couldn't accept 'vida' as the Marathi substitute for 'data'. Still, I did like her idea of occasionally searching for topics you don't care about at all, just to throw Google off so it can't learn everything about you. I will certainly put it into practice; my thanks to the author. I liked Vivek Shanbhag's 'Karan', but I had expected a twist at the end, so it was a little disappointing. I liked Shafaat Khan's 'Daring Mangtay'.

By comparison, the pieces by Rajeshwari Deshpande and Nagraj Manjule on Marathi films about politics didn't appeal to me much; Deshpande's felt downright disjointed. Neither Mahendra Damle's piece on 'The Imitation Game' nor Abhijit Tamhane's 'Abhivyaktichya Aadharachi Kathi' seemed interesting at the start, so I didn't finish either. 'Anandat Ghabaralepan', Shyam Manohar's story, went over my head; I just couldn't figure out what the author wanted to say. The sorrows of rural India, especially of the farming community, reach us daily through the newspapers. Being an ordinary citizen, there is nothing concrete one can do about it; it only causes distress. So I didn't read the story 'Chapata'.

In short, it's been a good start. The satisfaction of having picked the right issue to read is something else. :-)

Tuesday, November 27, 2018

QBQ - John Miller

It's Our Ship - Captain Michael Abrashoff

The Survivor's Club - Ben Sherwood

Blink - Malcolm Gladwell

Leadership Is Half The Story - Marc Hurwitz & Samantha Hurwitz

The Minerva Reef - Olaf Ruhen

The Art Of Positive Politics - Vijay Verma

Managing Stakeholder Expectations For Project Success - Ori Schibi

Thanks For The Feedback - Douglas Stone & Sheila Heen

Behave: The Biology Of Humans at Our Best and Worst - Robert Sapolsky

Blind Descent - James M. Tabor

Saturday, November 24, 2018

Confessions Of A Thug - Meadows Taylor

A Naga Odyssey

India: Empire And First World War Culture - Santanu Das

The Great War: Indian Writings On The First World War - Rakhshanda Jalil

Indian Empire At War - George Morton-Jack

India's Most Fearless - Shiv Aroor

Marry Me, Stranger - Novoneel Chakraborty

The Scam: From Harshad Mehta to Ketan Parekh - Debashis Basu & Sucheta Dalal

The Goat Thief - Perumal Murugan

Selection Day - Aravind Adiga

The Bard Of Blood - Bilal Siddiqi

Why I Am A Hindu - Shashi Tharoor

Leadership is half the story - Marc Hurwitz & Samantha Hurwitz

Wednesday, October 24, 2018

Alistair Cockburn. Agile Software Development: The Cooperative Game.

Rick Mugridge and Ward Cunningham. Fit for Developing Software: Framework for Integrated Tests

Andy Oram and Greg Wilson, editors. Beautiful Code: Leading Programmers Explain How They Think

G. Pascal Zachary. Show Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft.

Alex's Adventures In Numberland - Alex Bellos

The Man Who Knew Infinity - Robert Kanigel

Significant Figures - Ian Stewart



(Source Loksatta)



Debug It! Find, Repair, and Prevent Bugs in Your Code - Paul Butcher

Notes from the book:

The core of the debugging process consists of four steps:

1. Reproduce: Find a way to reliably and conveniently reproduce the problem on demand.

2. Diagnose: Construct hypotheses, and test them by performing experiments until you are confident that you have identified the underlying cause of the bug.
3. Fix: Design and implement changes that fix the problem, avoid introducing regressions, and maintain or improve the overall quality of the software.
4. Reflect: Learn the lessons of the bug. Where did things go wrong? Are there any other examples of the same problem that will also need fixing? What can you do to ensure that the same problem doesn’t happen again?

The things you need to control break down into three areas:

- The software itself: If the bug is in an area that has changed recently, then ensuring that you’re running the same version of the software as it was reported against is a good first step.

- The environment it’s running within: If interaction with an external system (some particular piece of hardware or a remote server, perhaps) is involved, then you probably want to ensure that you’re using the same one.

- The inputs you provide to it: If the bug is related to an area that behaves very differently depending upon how the software is configured, then start by replicating the user’s configuration.

Ensure that your reproduction is both reliable and convenient through iterative refinement:

- Reduce the number of steps, amount of data, or time required.
- Remove nondeterminism.
- Automate.

The scientific method can work in two different directions. In one case, we start with a hypothesis and attempt to create experiments, the results of which will either support or refute it. In the other, we start with an observation that doesn’t fit with our current theory and as a result modify that theory or possibly even replace it with something completely different.
In debugging, we almost always start from the latter. Our theory (that the software behaves as we think it does) is disproved by an observation (the bug) that demonstrates that we are mistaken.

1. Examine what you know about the software’s behavior, and construct a hypothesis about what might cause it.
2. Design an experiment that will allow you to test its truth (or otherwise).
3. If the experiment disproves your hypothesis, come up with a new one, and start again.
4. If it supports your hypothesis, keep coming up with experiments until you have either disproved it or reached a high enough level of certainty to consider it proven.

Instrumentation is code that doesn’t affect how the software behaves but instead provides insight into why it behaves as it does, e.g., logging.
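A minimal sketch of what such instrumentation looks like in Java, using java.util.logging (the OrderPricer class and its discount rule are my own illustration, not an example from the book):

```java
import java.util.logging.Logger;

// The log calls below are instrumentation: they record *why* the method
// returns what it does, without changing *what* it computes.
public class OrderPricer {
    private static final Logger LOG = Logger.getLogger(OrderPricer.class.getName());

    public static double priceWithDiscount(double subtotal, int itemCount) {
        LOG.fine(() -> "pricing subtotal=" + subtotal + ", items=" + itemCount);
        double discountRate = itemCount >= 10 ? 0.10 : 0.0; // bulk discount (assumed rule)
        LOG.fine(() -> "applied discount rate " + discountRate);
        return subtotal * (1.0 - discountRate);
    }
}
```

Because the lambda suppliers are evaluated only when FINE logging is enabled, the instrumentation costs almost nothing when it is switched off.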

Once you have found the source of the bug, you may have made changes during the diagnosis phase that you don’t want to keep. Check out a fresh copy of your code, and then follow this sequence:

1. Run the existing tests, and demonstrate that they pass.
2. Add one or more new tests, or fix the existing tests, to demonstrate the bug (in other words, to fail).
3. Fix the bug.
4. Demonstrate that your fix works (the failing tests no longer fail).
5. Demonstrate that you haven’t introduced any regressions (none of the tests that previously passed now fail).
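The five steps above can be walked through with a tiny hand-rolled example (the overflow bug in a midpoint calculation is my own stand-in, not a case from the book):

```java
// The buggy version computed (lo + hi) / 2, which overflows for large
// ints. The test added in step 2 failed against it; after the fix in
// step 3 (below), it passes, and the old test still passes too.
public class Midpoint {
    public static int mid(int lo, int hi) {
        return lo + (hi - lo) / 2; // the fix: cannot overflow when lo <= hi
    }

    public static void main(String[] args) {
        assert mid(0, 10) == 5;                   // step 1: existing test still passes
        assert mid(Integer.MAX_VALUE - 2,         // step 2: new test demonstrating
                   Integer.MAX_VALUE)             //         the (now fixed) bug
               == Integer.MAX_VALUE - 1;          // steps 4-5: no failures, no regressions
        System.out.println("all tests pass");
    }
}
```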

Bug fixing involves three goals:

1. Fix the problem.
2. Avoid introducing regressions.
3. Maintain or improve the overall quality (readability, architecture, test coverage, and so on) of the code.

Two golden rules:

1. Refactor, but never at the same time as modifying functionality.
2. One logical change, one check-in.

Make it obvious how to report a bug: Place instructions (or better yet, a direct link) to how to report a bug in your software’s About dialog box, online help, website, and anywhere else you think appropriate.
Automate: Install a top-level exception handler, and give the user the option to file a bug report that automatically contains all the relevant details.

Keep it simple: Each action you ask your users to perform will reduce the number who complete a transaction by half. In other words, ask them to click three times, and only 12.5 percent of them will complete. Five times, and you’ve reduced that figure to a little more than 3 percent.
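The halving rule is just completion = 0.5^actions, which is where the 12.5 and 3 percent figures come from:

```java
// Each action users must perform halves the completion rate.
public class DropOff {
    public static double completionRate(int actions) {
        return Math.pow(0.5, actions); // 3 clicks -> 0.125, 5 clicks -> 0.03125
    }
}
```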
Don’t have too rigid a template: It can be a good idea to have a standard template for bug reports, but beware of making that template too strict. Make sure that you have sensible options for each field including “none of the above.”

Automate environment and configuration reporting to ensure accurate reports.
Aim for bug reports that are - specific, unambiguous, detailed, minimal and unique.

To deal with a poor quality codebase:
1. Have the following in place: a source code control system, an automatic build process, continuous integration, and automated testing.
2. Separate clean code from unclean, and keep it clean.
3. Prioritise bugs.
4. Incrementally clean up code by putting tests in place and refactoring.

Add "identify compatibility issues" to your bug-fixing checklist.

Addressing Compatibility Issues

Provide a Migration Path
Give your users some way to modify their existing data, code, or other artifacts to fit the new order: for example, a utility that converts existing files so they work correctly with the new software.
It might be possible to automate this so that data is upgraded during installation. Make sure that you both test this carefully and save a backup, though; your users will not thank you if the upgrade fails and destroys all their data in the process.
Implement a Compatibility Mode
Alternatively, you can provide a release that contains both the old and new code, together with some means of switching between them. Users can start by using the compatibility mode, which runs the old code, and switch to the new after they’ve migrated. Ideally this switch is automatic: when the software detects an old file, for example.

Microsoft Word is a good example of this approach. When it opens an old file (with a .doc extension), it does so in a compatibility mode (see Figure 8.1). Save that file in the new format (.docx), and Word’s behavior, and possibly your document’s layout, changes.

This is not a solution to be adopted lightly. It’s very high cost, both for you and for your users.

From your point of view, it does nothing for the quality of the code. From the user’s point of view, it’s confusing: they need to understand that the software supports two different behaviors, what the differences are, and when each is appropriate. Turn to it only if this cost is truly justified.

Provide Forewarning
If you know that you’re going to have to make a significant change but don’t have to make it immediately, you can provide users with forewarning that they will eventually need to migrate.

Of course, this works only if you can afford to delay your fix long enough for your users to migrate, and only if they actually do.

It is an excellent idea to incorporate performance tests into your regression test suite. They might run representative operations on large data sets and report if the time taken falls outside of acceptable bounds, for example.

It can even be worth having tests that fail when things become unexpectedly faster. If a test suddenly runs twice as fast after a change that shouldn’t have affected performance noticeably, that can also indicate a problem. Perhaps some code you were expecting to be executed isn’t any longer?
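A sketch of such a performance regression test (the workload, a million-element sort, and the 200 ms budget are arbitrary assumptions for illustration):

```java
import java.util.Arrays;
import java.util.Random;

// Times a representative operation on a large data set and fails if it
// exceeds an agreed budget.
public class SortPerfTest {
    public static long timeSortMillis(int n) {
        int[] data = new Random(42).ints(n).toArray(); // fixed seed: reproducible input
        long start = System.nanoTime();
        Arrays.sort(data);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeSortMillis(1_000_000);
        if (elapsed > 200) { // acceptable bound, chosen per project
            throw new AssertionError("sort took " + elapsed + " ms; budget is 200 ms");
        }
    }
}
```

A lower bound ("suspiciously fast") check could be added in the same place, per the point above.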

When patching an existing release:
- Concentrate on reducing risk.
- Consider compatibility implications when fixing bugs.
- Fix performance bugs only after accurate profiling.

There’s more to effective automated testing than simply automating your tests. To achieve maximum benefit, your tests need to satisfy the following goals:

1. Unambiguous pass/fail: Each test outputs a single bit—pass or fail. No shades of gray, no qualitative output, no interpretation required. Just a simple yes or no.

2. Self-contained: No setup required before running a test. Before it runs, it sets up whatever environment it needs automatically, and just as important, it undoes any changes to the environment afterward, leaving everything as it found it.
3. Single-click to run all the tests: All tests can be run in one step without interfering with each other. As with a single test, the output of the complete test suite is a simple pass or fail—pass if every test passes, fail otherwise.
4. Comprehensive coverage: It’s easy to prove that achieving complete coverage for any nontrivial body of code is prohibitively expensive. But don’t allow that theoretical limitation to put you off; it is possible to get close enough to complete coverage to make no practical difference.

Mocks and stubs are often confused. Stubs are passive, simply responding with canned data when called, whereas mocks are active, validating expectations about how and when they are called.
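The distinction, hand-rolled in Java rather than with a mocking library (the MailService example is my own, not the book's):

```java
// A stub is passive: it just supplies canned behavior.
// A mock is active: it records calls and validates expectations about them.
interface MailService { void send(String to, String body); }

class StubMailService implements MailService {
    public void send(String to, String body) { /* canned no-op; checks nothing */ }
}

class MockMailService implements MailService {
    int sendCount = 0;
    String lastRecipient;

    public void send(String to, String body) {
        sendCount++;
        lastRecipient = to;
    }

    // The mock itself verifies how it was called.
    void verifySentOnceTo(String expected) {
        if (sendCount != 1 || !expected.equals(lastRecipient)) {
            throw new AssertionError("expected exactly one mail to " + expected);
        }
    }
}
```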

⦁    Branch as late as possible. It may be tempting to create your stabilization branch well in advance (after all, if some stabilization is good, more must be better?), but the chances are that the productivity you lose by doing so isn’t worth it.
⦁    Stick to a single level of branching. If you find yourself branching your branches, you know that you’re in trouble.

⦁    Set up your continuous integration server to build all the branches that are actively being worked on.

⦁    Check in small changes often. Small changes are easier to understand, merge, and roll back if necessary.

⦁    Make a change in the branch only if it really needs to be in the branch.
⦁    Merge from the branch to the trunk, not the other way around. The branch represents released software, so a problem in the branch is likely to have more severe consequences than a problem in the trunk.
⦁    Merge changes from branch to trunk immediately, while the change is fresh in your mind.
⦁    Keep an audit trail so you know which changes were merged and when.

So, it’s a good idea to have a build machine that is used to make release builds (possibly several build machines if you’re working on cross-platform software). It should always be kept pristine and not be used for anything else so that you can trust that it’s in the right state.
Whenever you make a release, you need to make sure that you keep a record of what source was used to create that release.
If you do have problems with tests that take too long to run, consider creating a suite of short tests that you can run for every check-in, as well as running the full suite overnight.
So, the first rule is to use static analysis. Switch on all of the warnings supported by your compiler and get hold of any other tools that might prove useful in your environment.

The second rule is to integrate your chosen tool or tools tightly into your development process. Don’t run them only occasionally—when you’re looking for a bug, for example. Run them every single time you compile your source. Treat the warnings they generate as errors, and fix them immediately. Integrate static analysis into every build.

Contracts, Pre-conditions, Post-conditions, and Invariants
One way of thinking about the interface between one piece of code and another is as a contract. The calling code promises to provide the called code with an environment and arguments that conform to its expectations. In return, the called code promises to carry out certain actions or return certain values that the calling code can then use.
It’s helpful to consider three types of condition that, taken together, make up a contract:
Pre-conditions: The pre-conditions for a method are those things that must hold before it’s called in order for it to behave as expected. The pre-conditions for our addHeader() method are that its arguments are nonempty, don’t contain invalid characters, and so on.
Post-conditions: The post-conditions for a method are those things that it guarantees will hold after it’s called (as long as its pre-conditions were met). A post-condition for our addHeader() method is that the size of the headers map is one greater than it was before.
Invariants: The invariants of an object are those things that (as long as its methods’ pre-conditions are met before they’re called) it guarantees will always be true—that the cached length of a linked list is always equal to the length of the list, for example.
If you make a point of writing assertions that capture each of these three things whenever you implement a class, you will naturally end up with software that automatically detects a wide range of possible bugs.
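A sketch of the addHeader() contract described above, expressed as assertions (the class body is my own reconstruction, not the book's code):

```java
import java.util.HashMap;
import java.util.Map;

public class Headers {
    private final Map<String, String> headers = new HashMap<>();

    public void addHeader(String name, String value) {
        // Pre-conditions: the arguments conform to expectations.
        assert name != null && !name.isEmpty() : "pre: name must be nonempty";
        assert value != null : "pre: value must be non-null";

        boolean wasPresent = headers.containsKey(name);
        int sizeBefore = headers.size();
        headers.put(name, value);

        // Post-condition: the map grew by one, unless an existing header was replaced.
        assert headers.size() == sizeBefore + (wasPresent ? 0 : 1) : "post: size";
    }

    public int size() { return headers.size(); }
}
```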

Evaluating assertions takes time and doesn’t contribute anything to the functionality of the software (after all, if the software is functioning correctly, none of the assertions should ever do anything). If an assertion is in the heart of a performance-critical loop, or its condition takes a while to evaluate, it can have a detrimental effect on performance.

A more pertinent reason for disabling assertions, however, is robustness. If an assertion fails, the software unceremoniously exits with a terse and (to an end user) unhelpful message.

Have the best of both worlds: robust production software, i.e., software that will work even in the presence of bugs, and fragile development/debugging software, i.e., with assert statements enabled.

assert s != null : "Null string passed to allUpper";
if (s == null)
    return false;

As with many tools, assertions can be abused. There are two common mistakes you need to avoid—assertions with side effects and using them to detect errors instead of bugs.
An assertion’s task is to check that the code is working as it should, not to affect how it works. For this reason, it’s important that you test with assertions disabled as well as with assertions enabled. If any side effects have crept in, you want to find them before the user does.

Errors may be undesirable, but they can happen in bug-free code. Bugs, on the other hand, are impossible if the code is operating as intended.

Here are some examples of conditions that almost certainly should not be handled with an assertion:
⦁    Trying to open a file and discovering that it doesn’t exist
⦁    Detecting and handling invalid data received over a network connection
⦁    Running out of space while writing to a file
⦁    Network failure

Error-handling mechanisms such as exceptions or error codes are the right way to handle these situations.
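The split in code (the ConfigLoader class and its fall-back-to-defaults policy are my own illustration): a missing file is an error, handled with an exception; a null argument from our own code is a bug, caught by an assertion.

```java
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {
    public static String load(String path) {
        // Bug check: only broken calling code can pass null here.
        assert path != null : "bug: null path passed to load()";

        try (FileReader reader = new FileReader(path)) {
            // ... parse the file ...
            return "loaded";
        } catch (IOException e) {
            // Error handling: a missing or unreadable file can happen in
            // bug-free code, so fall back to defaults instead of asserting.
            return "defaults";
        }
    }
}
```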

Be very suspicious of any proposal to rewrite. Perform a very careful cost/benefit analysis. Sometimes the old code really is so terrible that it’s not worth persevering with it, but take the time to prove this to yourself.

If you do decide to go down this road,minimize your exposure as much as possible. Try to find a way to rewrite the code incrementally instead of in a “big bang.”

Test against the existing code, and verify that you get the same results. Be particularly careful to find the corner cases that the existing code handles correctly and that you need to replicate.

Sunday, October 7, 2018

Mike Clark. Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications

Capers Jones. Estimating Software Costs

Donald Ervin Knuth. Literate Programming. Center for the Study of Language and Information.

Robert C. Martin. Agile Software Development, Principles, Patterns, and Practices


The Mars Room - Rachel Kushner

Kashmir: Glimpses of History and the Story of Struggle - Saifuddin Soz

Practices Of An Agile Developer - Venkat Subramaniam & Andy Hunt

My notes from this book:

Fixing the problem is the top priority, not finding out who caused it. Measuring compliance to process doesn’t measure outcome. Agile teams value outcome over process.

If you approach someone for help and get a less than professional response, you can try to salvage the conversation. Explain exactly what you want, and make it clear that your goal is the solution, not the blame/credit contest.

If one team member misunderstood a requirement, an API call, or the decisions reached in the last meeting, then it’s very likely other team members have misunderstood as well. Make sure the whole team is up to speed on the issue.

Ask a leading question that allows someone to figure out the problem for themselves.

If you’re having a design meeting, or are just having trouble getting to a solution, set a hard deadline such as lunchtime or the end of the day. That kind of time boxing helps keep the team moving and keeps you from getting too hung up on an endless ideological debate.

At the start of a meeting, pick a mediator who will act as the decision maker for that session.
Once a solution is picked (by whatever means), each team member should switch gears and give their complete cooperation in seeing it through to implementation.

Criticize ideas, not people. Take pride in arriving at a solution rather than proving whose idea is better.



Before setting out to find the best solution, it might be a good idea to make sure everyone agrees on what best means in this context.

Invest in team learning: pick a topic, ask someone to present it, and discuss its pros and cons and applicability to your project. Keep these sessions short but frequent so people can keep up, and stick to a regular schedule.

It is equally important to unlearn useless old habits and knowledge as it is to acquire new ones.

Keep asking Why. Don’t just accept what you’re told at face value. Keep questioning until you understand the root of the issue.

A hard deadline forces you to make the hard choices. You can’t waste time on philosophical discussions or features that are perpetually 80% done. A time box keeps you moving.

Developers, managers, or business analysts shouldn’t make business-critical decisions.
Present details to business owners in a language they can understand, and let them make the decision.

Design should only be as detailed as needed to implement.

There are two levels of design: strategic and tactical. The up-front design is strategic: you typically do that when you don’t yet have a deep understanding of the requirements. That is, it should express a general strategy but not delve into precise details.

This up-front, strategic level of design shouldn’t specify the details of methods, parameters, fields, or the exact sequence of interaction between objects. That’s left to the tactical design, and it unfolds only as the project evolves.

Instead of starting with a tactical design that focuses on individual methods or data types, it’s more appropriate to discuss possible class designs in terms of responsibilities, because that is still a high-level, goal-oriented approach. In fact, the CRC card design method does just that. Classes are described in terms of the following:
• Class name
• Responsibilities—what is it supposed to do
• Collaborators—what other objects it works with to get the job done

How can you tell whether a design is good or even adequate? The best feedback on the nature of design comes from the code. If small changes in requirements remain easy to implement, then it’s a good design. If small changes cause a large disruption or cause a disruption across a large swath of the code base, then the design needs improvement.

A good design is accurate, but not precise. That is, what it says should be correct, but it shouldn’t go far as to include details that might change or that are uncertain.

There’s a simple workflow to follow to make sure you don’t check in broken code:

Run your local tests. Begin by making sure the code you’re working on compiles and passes all of its unit tests. Then make sure all of the other tests in the system pass as well.

Check out the latest source. Get the latest copy of the source code from the version control system, and compile and test against that. Very often, this is where a surprise will show up: someone else may have made a change that’s incompatible with yours.

Check in. Now that you have the latest version of code compiling and passing its tests, you can check it in.

Deploy your application automatically from the start.
Use that deployment to install the application on arbitrary machines with different configurations to test dependencies. QA should test the deployment as well as your application.

Deploying an emergency bug fix should be easy, especially in a production server environment. You know it will happen, and you don’t want to have to do it manually, under pressure, at 3:30 a.m.

The user should always be able to remove an installation safely and completely—especially in a QA environment.

Identify core features that’ll make the product usable, and get them into production, into the hands of the real users, as soon as possible.

An alternative to fixed-price contracts:

1. Offer to build an initial, small, useful portion of the system (in the construction analogy, perhaps just the garage). Pick a small enough set of features that this first delivery takes no more than six to eight weeks. Explain that not all the features will make it in, but that enough will be delivered so that the users could actually be productive.

2. At the end of that first iteration, the client has two choices: they can agree to continue to the next iteration, with the next set of features; or they can cancel your contract, pay you only for the few weeks’ worth of work you’ve done, and either throw it away or get some other group to take it and run with it.

3. If they go ahead, you’re in a better position to forecast what you can get done during the next iteration. At the end of the next iteration, the client still has those same two choices: stop now, or go on to the next.

Being agile doesn’t mean “Just start coding, and we’ll eventually know when we’re done.” You still need to give a ballpark estimate, with an explanation of how you arrived at it and the margin of error given your current knowledge and assumptions.

You might also consider a fixed price per iteration set in the contract while leaving the number of iterations loose, perhaps determined by ongoing work orders (a.k.a. “Statement of Work”).

Unit testing is only as effective as your test coverage. You might want to look at using test coverage tools to give you a rough idea of where you stand.

If your domain experts give you algorithms, calculations, or equations, provide them with a way of testing your implementation in isolation. Make those tests part of your test suite—you want to make sure you continue to provide the correct answers throughout the life of the project.

Consider performance, convenience, productivity, cost, and time to market. If performance is adequate, then focus on improving the other factors. Don’t complicate the design for the sake of perceived performance or elegance.

As the caller, you should not make decisions based on the state of the called object and then change the state of that object. The logic you are implementing should be the called object’s responsibility, not yours. For you to make decisions outside the object violates its encapsulation
and provides a fertile breeding ground for bugs.

A helpful side technique related to Tell, Don’t Ask is known as command-query separation. The idea is to categorize each of your functions and methods as either a command or a query and document them as such in the source code (it helps if all the commands are grouped together and all the queries are grouped together).

A routine acting as a command will likely change the state of the object and might also return some useful value as a convenience. A query just gives you information about the state of the object and does not modify the externally visible state of the object.

That is, queries should be side effect free as seen from the outside world (you may want to do some pre-calculation or caching behind the scenes as needed, but fetching the value of X in the object should not change the value of Y).
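Here's a minimal Python sketch of the command-query split (the `Counter` class is made up for illustration):

```python
class Counter:
    """A counter whose methods are split into commands and queries."""

    def __init__(self):
        self._count = 0

    # --- Command: changes state; may return a value as a convenience ---
    def increment(self):
        self._count += 1
        return self._count  # convenience return; this is still a command

    # --- Query: reports state; no externally visible side effects ---
    def value(self):
        return self._count
```

A caller can invoke `value()` any number of times without changing the object, while `increment()` is the only route to a state change.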

Tell, don’t ask. Don’t take on another object’s or component’s job. Tell it what to do, and stick to your own job.

Liskov’s Substitution principle tells us that “Any derived class object must be substitutable wherever a base class object is used, without the need for the user to know the difference.” In other words, code that uses methods in base classes must be able to use objects of derived classes without modification.

To comply with the Substitution principle, your derived class services (methods) should require no more, and promise no less, than the corresponding methods of the base class; it needs to be freely substitutable. This is an important consideration when designing class inheritance hierarchies.

When using inheritance, ask yourself whether your derived class is substitutable in place of the base class. If the answer is no, then ask yourself why you are using inheritance. If the answer is to reuse code in the base class when developing your new class, then you should probably use composition instead. Composition is where an object of your class contains and uses an object of another class, delegating responsibilities to the contained object (this technique is also known
as delegation).
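A common illustration of this choice, sketched in Python (the `Stack` class is hypothetical): a stack is not really a list, so rather than inheriting from `list` (which would expose insert/remove anywhere and break the abstraction), it contains a list and delegates to it.

```python
class Stack:
    """A stack that contains a list and delegates to it,
    instead of inheriting from list."""

    def __init__(self):
        self._items = []  # contained object; work is delegated to it

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```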

Maintain a log of problems faced and solutions found.

Here are some items that you might want to include in your entries:

• Date of the problem
• Short description of the problem or issue
• Detailed description of the solution
• References to articles, and URLs, that have more details or related information
• Any code segments, settings, and snapshots of dialogs that may be part of the solution or help you further understand the details

Keep the log in a computer-searchable format. That way you can perform a keyword search to look up the details quickly.
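For a plain-text log where entries are separated by blank lines, the keyword lookup can be as simple as this sketch (the entry format is an assumption):

```python
def search_log(log_text, keyword):
    """Return the entries (separated by blank lines) whose text
    contains the keyword, case-insensitively."""
    entries = [e for e in log_text.split("\n\n") if e.strip()]
    return [e for e in entries if keyword.lower() in e.lower()]
```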

Treat warnings as errors. Checking in code with warnings is just as bad as checking in code with errors or code that fails its tests. No checked-in code should produce any warnings from the build tools.

Emphasize collective ownership of code. Rotate developers across different modules and tasks in different areas of the system.

Set a time limit for how long anyone on the team can be stuck on a problem before asking for help. One hour seems to be a pretty good target.

What should you look for during a code review? You might develop your own list of specific issues to check (all exception handlers are nonempty, all database calls are made within the scope of a transaction, and so on), but here’s a very minimal list to get you started:

• Can you read and understand the code?
• Are there any obvious errors?
• Will the code have any undesirable effect on other parts of the application?
• Is there any duplication of code (within this section of code itself or with other parts of the system)?
• Are there any reasonable improvements or refactorings that can improve it?

In addition, you might want to consider using code analysis tools. If that sort of static analysis proves useful to you, make these tools part of your continuous build.

Review code after each task, using different developers.

Code reviews are useless unless you follow up on the recommendations quickly. You can schedule a follow-up meeting or use a system of code annotations to mark what needs to be done and track that it has been handled.

Always close the loop on code reviewers; let everyone know what steps you took as a result of the review.

Tuesday, October 2, 2018

https://www.amazon.com/Do-More-Faster-TechStars-Accelerate-ebook/dp/B0046H9BBM

https://www.amazon.com/Startups-Open-Sourced-Stories-inspire/dp/0615491928/

https://www.amazon.com/Hackers-Painters-Big-Ideas-Computer/dp/1449389554

https://www.amazon.com/Eric-Business-Software-Experts-Voice/dp/1590596234

https://www.amazon.com/Words-that-Sell-Revised-Expanded-ebook/dp/B0062Y5V4I

https://www.amazon.com/Anything-You-Want-Derek-Sivers/dp/1936719118/

https://copyhackers.com/shop/

https://www.amazon.com/Ikigai-ebook/dp/B006M9T8NI/

https://www.amazon.com/Dip-Little-Book-Teaches-Stick/dp/1591841666/

https://www.amazon.com/Personal-Development-Smart-People-Conscious/dp/1401922767/

https://www.amazon.com/This-Little-Program-Went-Market/dp/0615345832/

https://www.amazon.com/Software-That-Sells-Practical-Developing/dp/0764597833/

https://www.amazon.com/Program-Product-Turning-Saleable-Experts/dp/1590599713/

https://www.amazon.com/Slack-Getting-Burnout-Busywork-Efficiency/dp/0767907698/

https://www.amazon.com/Secrets-Consulting-Giving-Getting-Successfully/dp/0932633013

Monday, October 1, 2018

Tom DeMarco. Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency

Peter Drucker. Managing for Results

Roger Fisher, William Ury, and Bruce Patton. Getting to Yes

Ron Jeffries, Ann Anderson, and Chet Hendrickson. Extreme Programming Installed

Stephen R. Covey, A. Roger Merrill, and Rebecca R. Merrill, First Things First

G. Pascal Zachary, Show-Stopper!

Michael Schrage, No More Teams! Mastering the Dynamics of Creative Collaboration

Paul G. Bassett, Framing Software Reuse: Lessons from the Real World

Tarek Abdel-Hamid and S. E. Madnick, "Lessons Learned from Modeling the Dynamics of Software Project Management,"

Tarek Abdel-Hamid, "Thinking in Circles,"

Chi Y. Lin, "Walking on Battlefields: tools for strategic software management,"

Brad Smith, Nghia Nguyen, and Richard Vidale, "Death of a Software Manager: How to Avoid Career Suicide through Dynamic Software Process Modeling,"

Laurence Peter, The Peter Principle

Marcus Buckingham & Curt Coffman, First, Break All the Rules: What the World's Greatest Managers Do Differently

Practical Issues in Database Management: A Reference for the Thinking Practitioner by Fabian Pascal

An Introduction to Database Systems by Chris Date

Gyato University, Dharamshala

Dal Lake, Dalhousie
Saturday, September 29, 2018

A Man Called Ove - Fredrik Backman

The Lives Of A Cell - Lewis Thomas

Born A Crime - Trevor Noah

One Part Woman - Perumal Murugan

What Money Can't Buy: The Moral Limits Of Markets - Michael Sandel

Friday, September 28, 2018

Jon R. Katzenbach and Douglas K. Smith. The Wisdom of Teams: Creating the High-Performance Organization.

The Toyota Way - Jeffrey Liker

Johanna Rothman. Hiring the Best Knowledge Workers, Techies, and Nerds: The Secrets and Science of Hiring Technical People.

Johanna Rothman. Manage It!: Your Guide to Modern Pragmatic Project Management. The Pragmatic Programmers,

Robert C. Solomon and Fernando Flores. Building Trust in Business, Politics, Relationships, and Life.

Keith Sawyer. Group Genius: The Creative Power of Collaboration.

Preston G. Smith and Donald G. Reinertsen. Developing Products in Half the Time: New Rules, New Tools

R. Brian Stanfield, ed. The Art of Focused Conversation: 100 Ways to Access Group Wisdom in the Workplace

Steve Tockey. Return on Software: Maximizing the Return on Your Software Investment.

Allen C. Ward. Lean Product and Process Development

Gerald M. Weinberg. Psychology of Computer Programming

Gerald M. Weinberg. Weinberg On Writing: The Fieldstone Method

James P. Womack and Daniel T. Jones. Lean Thinking

Tim Mackinnon, Steve Freeman, and Philip Craig. Endo-testing: Unit testing with mock objects.

Giancarlo Succi and Michele Marchesi, Extreme Programming Examined

Pragmatic Programmers. Pragmatic Automation

Jeremy Epstein - It’s ALL on the Blog, DON’T Buy the Book

Andrew McAfee and Erik Brynjolfsson Machine, Platform, Crowd 

The Attention Merchants, Tim Wu

Pragmatic Unit Testing- Andrew Hunt & David Thomas

This book is from 2003, so it's a bit dated, but I read it for the concepts. And I found plenty of them.

First, of course, is a handy way to remember the six specific areas to test - RIGHT-BICEP! Before you go trooping to the friendly neighborhood gym to lift the barbells, here is the breakdown -

Right. Are the results right?

B. Are all the boundary conditions CORRECT?

I. Can you check inverse relationships? For instance, you might check a method that calculates a square root by squaring the result and testing that it is tolerably close to the original number. You might check that some data was successfully inserted into a database by then searching for it, and so on. Of course, you have to guard against the possibility that there could be a common error in the original routine and its inverse, thus giving seemingly correct results. So if possible, use a different source for the inverse test.

C. Can you cross-check results using other means?

E. Can you force error conditions to happen?

P. Are performance characteristics within bounds? For example, the time taken to execute a method as data size grows. So execute the method with data of different sizes and check that the time taken is within acceptable limits.
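As a concrete sketch of the "I" (inverse relationships) in Python's `unittest`: a test of `math.sqrt` that squares the result by plain multiplication, a different route than the routine under test, so a shared error is less likely.

```python
import math
import unittest

class SquareRootInverseTest(unittest.TestCase):
    """Check math.sqrt via its inverse: squaring the result
    should recover the original number within tolerance."""

    def test_inverse_relationship(self):
        for n in [0.0, 1.0, 2.0, 100.0, 12345.678]:
            root = math.sqrt(n)
            # square via multiplication rather than calling a power
            # routine, to use a different source for the inverse test
            self.assertTrue(
                math.isclose(root * root, n, rel_tol=1e-9, abs_tol=1e-12))
```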

Right. Next, we all can vividly recall an incident or two when the developer forgot to test a boundary condition, resulting in much heartburn all around. So here's a handy way to get it right - the acronym CORRECT.

Conformance. Does the value conform to an expected format?
Ordering. Is the set of values ordered or unordered, as appropriate?
Range. Is the value within reasonable minimum and maximum values?
Reference. Does the code reference anything external that isn't under the direct control of the code itself?
Existence. Does the value exist (e.g., is non-null, nonzero, present in a set, etc.)?
Cardinality. Are there exactly enough values (the 0-1-n rule)?
Time (absolute and relative). Is everything happening in order? At the right time? In time? Are there any concurrency issues? What is the calling sequence of methods? What about timeouts?

Another important thing to keep in mind is when to use mock objects. The book mentions a list by Tim Mackinnon:

- The real object has nondeterministic behavior (it produces unpredictable results, as in a stock-market quote feed).
- The real object is difficult to set up.
- The real object has behavior that is hard to trigger (for example, a network error).
- The real object is slow.
- The real object has (or is) a user interface.
- The test needs to ask the real object about how it was used (for example, a test might need to check to see that a callback function was actually called).
- The real object does not yet exist (a common problem when interfacing with other teams or new hardware systems).

The three key steps to using mock objects for testing are:
1. Use an interface to describe the object
2. Implement the interface for production code
3. Implement the interface in a mock object for testing
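The three steps might look like this in Python (the quote-feed scenario follows the book's nondeterministic stock-feed case, but all names here are made up):

```python
from abc import ABC, abstractmethod

# Step 1: use an interface to describe the object.
class QuoteFeed(ABC):
    @abstractmethod
    def current_price(self, symbol):
        ...

# Step 2: the production implementation would talk to the real,
# nondeterministic market feed (omitted here).

# Step 3: implement the interface in a mock object for testing.
class MockQuoteFeed(QuoteFeed):
    def __init__(self, canned_prices):
        self.canned_prices = canned_prices
        self.calls = []  # record usage so tests can ask about it

    def current_price(self, symbol):
        self.calls.append(symbol)
        return self.canned_prices[symbol]

def portfolio_value(feed, holdings):
    """Code under test: values holdings using whatever feed it is given."""
    return sum(qty * feed.current_price(sym) for sym, qty in holdings.items())
```

Because the mock records the symbols it was asked about, a test can also verify how the object was used, per the second-to-last item in Mackinnon's list.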

Then there are the properties of good tests, i.e., A-TRIP - Automatic, Thorough, Repeatable, Independent, and Professional (encapsulation, the DRY principle, lowering coupling, etc.).

For those who are wondering about where to keep the test code, the author provides a few suggestions:

1. The first and easiest method of structuring test code is to simply include it right in the same directory alongside the production code. Though this allows classes to access each other's protected members for testing purposes, it clutters the production code directory, and special care might need to be taken while preparing releases. The next option is to create test subdirectories under every production directory. This gets test code out of the way but takes away access to protected members, so you will have to make the test class a subclass of the class that it wants to test.

2. Another option is to place your Test classes into the same package as your production code, but in a different source code tree. The trick is to ensure that the root of both trees is in the compiler's CLASSPATH. Here the code is away from production code and yet has access to its protected members for testing.

Following advice needs to be kept in mind as you go about testing:

1. When writing tests, make sure that you are only testing one thing at a time. That doesn't mean that you use only one assert in a test, but that one test method should concentrate on a single production method, or a small set of production methods that, together, provide some feature.

2. Sometimes an entire test method might test only one small aspect of a complex production method. You may need multiple test methods to exercise the one production method fully.

3. Ideally, you'd like to have a traceable correspondence between potential bugs and test code. In other words, when a test fails, it should be obvious where in the code the underlying bug exists. Use the per-test setup and teardown methods and the per-class setup and teardown methods to keep each test focused and independent.

4. When you find bugs that weren't caught by the tests, write tests to catch them in the future. This can be done in four steps: identify the bug; write a test that fails, to prove the bug exists; fix the code so that the test now passes; verify that all tests still pass.
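The four steps from point 4 can be sketched like this (the `median` bug is invented for illustration):

```python
def median(values):
    """Median of a non-empty list (fixed to handle even lengths)."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    # the hypothetical original bug: this branch returned ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def test_median_even_length():
    # this test would have failed before the fix, proving the bug existed
    assert median([1, 2, 3, 4]) == 2.5

test_median_even_length()
```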

5. Introduce bugs and make sure that the tests catch them.

6. Most of the time, you should be able to test a class by exercising its public methods. If there is significant functionality that is hidden behind private or protected access, that might be a warning sign that there's another class in there struggling to get out. When push comes to shove, however, it's probably better to break encapsulation with working, tested code than it is to have good encapsulation of untested, non-working code.

7. Make the test code an integral part of the code review process. So follow this order:

- Write test cases and/or test code.
- Review test cases and/or test code.
- Revise test cases and/or test code per review.
- Write production code that passes the tests.
- Review production and test code.
- Revise test and production code per review

8. While coding, if you can't answer this simple question - how am I going to test this? - take it as a signal that you need to review your design.

9. Establish up-front the parts of the system that need to perform validation, and localize those to a small and well-known part of the system.

10. Check input at the boundaries of the system, and you won't have to duplicate those tests inside the system. Internal components can trust that if the data has made it this far into the system, then it must be okay.
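Point 10 might look like this in Python (the age/ticket example is made up): the boundary function validates once, and internal code trusts its input.

```python
def read_age(raw):
    """Boundary of the system: validate raw input once, here."""
    age = int(raw)  # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range: %d" % age)
    return age

def ticket_price(age):
    """Internal component: trusts that age already passed the boundary,
    so it performs no validation of its own."""
    return 5 if age < 12 else 10
```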

11. Cull out individual tests that take longer than average to run, and group them together somewhere. You can run these optional, longer-running tests once a day with the build, or when you check in, but not have to run them every single time you change code.

12. Unit tests should be automatic on two fronts - they should be run automatically and their results should be checked automatically. The goal remains that every test should be able to run over and over again, in any order, and produce the same results. This means that tests cannot rely on anything in the external environment that isn't under your direct control. Use Mock objects. Tests should be independent from the environment and from each other.

Finally, following list can be handy in checking for potential gotchas involving checked-in code:

- Incomplete code (e.g., checking in only one class file but forgetting to check in other files it may depend upon).
- Code that doesn't compile.
- Code that compiles, but breaks existing code such that existing code no longer compiles.
- Code without corresponding unit tests.
- Code with failing unit tests.
- Code that passes its own tests but causes other tests elsewhere in the system to fail.

Wednesday, September 26, 2018

Manage Your Project Portfolio - Johanna Rothman

I just finished reading this book and I am glad I did, because it gave me a lot of valuable tips about finishing key projects on time. Just jotting down the important points here for future reference...

The fundamental thing to keep in mind is that unless you have information on all the work that you are expected to do, you cannot possibly think of finishing the most important things on time. So the first step is to create a project portfolio. Nothing fancy is needed. Just one bucket for each of the next 4 weeks plus one bucket for each of the next 3 months as columns, i.e., 7 columns in total. List each of your resources as rows, and in each cell note what project and what feature that resource will be working on for that duration. Another key thing to remember is that portfolio monitoring can best be done in an agile environment; the waterfall methodology doesn't provide the data needed until the whole work is done... or not done. The following five types of work need to be included - periodic work (i.e., done every week, month, or quarter), ongoing work, emergency work, management work, and project work.
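The bucket layout described above could be sketched as a simple table in Python (the resource names and cell entries are made up):

```python
# 7 time buckets as columns: one per week for the next 4 weeks,
# then one per month for the following 3 months.
BUCKETS = ["W1", "W2", "W3", "W4", "M2", "M3", "M4"]

# Resources as rows; each cell holds (project, feature) or None.
portfolio = {
    "Asha": {b: None for b in BUCKETS},
    "Ravi": {b: None for b in BUCKETS},
}

# In each cell, note the project and feature the resource works on.
portfolio["Asha"]["W1"] = ("Billing", "invoice export")
```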

Then at a set frequency (every month or quarter), this portfolio needs to be reviewed. You have to make one of the following decisions for each project - commit (this is full commitment, not partial), kill, or transform. The projects to be committed to can be ranked based on a point system (out of a total of, say, 10,000 points, keep assigning points to projects and staff them in order from highest to lowest), risk or cost (in both these cases a few iterations would be needed to get some idea of the velocity, so the total cost or risk can be projected), or the customers for whom the projects will be delivered (business value). Other ways of arriving at a ranking are pairwise comparison, single and double elimination, and your product's position in the marketplace (features needed to close the chasm between early adopters and the early majority rank high).
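The point-system ranking might be sketched like this (the project names and point values are made up; they sum to a 10,000-point total):

```python
def rank_by_points(projects):
    """Staff projects in order from highest to lowest points."""
    return sorted(projects, key=lambda p: p["points"], reverse=True)

projects = [
    {"name": "Billing revamp", "points": 4000},
    {"name": "Build-system fix", "points": 3500},
    {"name": "New reports", "points": 2500},
]
ranked = rank_by_points(projects)
```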

An important thing to keep in mind is that you should not do the ranking alone - do it with peers, so you will consider all aspects, or at least more of them, along with doing what's best for your organization. Of course, you need to run it by your superiors for final confirmation. And don't rank based on ROI unless you are a custom development group. Also keep in mind that sometimes internal projects, e.g., fixing the build or the auto-testing system, can rank higher than external projects because the payoff will be big. If necessary, consider the organization's mission, if you have one, to help narrow down the priority of the projects. Chapter 11 on drafting a mission - be it your group's or the organization's - is a must-read even if the company that you work for has a formal mission.

Review at least quarterly for serial development. For iterative or incremental development, review every time a feature chunk is done or a prototype is completed. You need a demo and data, i.e., the team's velocity since the last such evaluation, to make a decision about the fate of the project. You also need to take into consideration project obstacles and organization strategy. So have the right people at the meeting - those who have this data and the authority to make decisions.

So basically it will help to ask the following four questions at the evaluation meeting: Does the project fit the organization strategy? What's been done since the last evaluation (the demo will answer this)? How is the team placed with respect to the product backlog (velocity and the backlog burndown chart answer this)? What obstacles is the team facing (this will give you a measure of the risk involved in continuing the project in the same way)? If your organization tracks project cost, you’ll also need to know the run rate (the cost of the project per unit time), the total project cost, and possibly the monthly/quarterly/yearly project cost data.

Other trigger points for reviewing your portfolio are when release dates change, when your customers want something new, when your competitors announce or release new products, or when new technology becomes possible. Since these events cannot be predicted in advance, a bit of flexibility is needed as far as portfolio evaluation goes. Allocate time for exploratory work when you do portfolio evaluation.

The ideal time to review the portfolio is:

• When a project finishes something you can see (the project cycles)

• When you have enough information about the next version of a product (the planning cycles) 

• When it’s time to allocate budget and people to a new project (the business cycles)

Rothman gives two more golden rules. One, which we project managers have known since the dawn of time - that multi-tasking does more harm than good. And two, make decisions as late as possible so you get time to consider many aspects of the situation and changing scenarios to make the best decision. A note about budgeting - the budget target is set for a year, but funds are allocated for 3 months at a time. Have a fixed budget for a fixed time and see how many features can be developed.

Keeping a parking lot of projects is another good idea. The book describes two more concepts - using Kanban and Minimum Marketable Features (MMF). There is some material on fixing the queue length, work size, and work cost, but I am in favor of fixing the timebox duration. I don't think the other three things can be fixed in any meaningful way in an average project.

Project managers often wonder which are the best measures of a project's health. Rothman provides the answer in a no-nonsense way. The only two things that need to be measured are team velocity (current and historical) and the product backlog burndown chart. Another thing that can be measured is cumulative flow, i.e., work in progress over time compared to the total project scope. The more work in progress, the less value this project has provided to the organization.
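A backlog burndown is easy to derive from the two measures Rothman names - total scope and per-iteration velocity. A sketch:

```python
def backlog_burndown(total_points, velocities):
    """Remaining backlog after each iteration, given the
    velocity (points completed) in each iteration."""
    remaining = [total_points]
    for v in velocities:
        remaining.append(max(0, remaining[-1] - v))
    return remaining
```

Plotting the returned series over iterations gives the burndown chart; a flattening curve signals trouble early.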

There are a couple of points from the book that I find rather far from practical, such as not taking time to put an overall architecture in place but rather letting it evolve as the iterations proceed. You would need a highly sophisticated and experienced team to do that - something most of us cannot have. Another one, of course, is working with colleagues to make sure everyone is pulling in the direction best suited to the organization's goals. This is easier said than done. There is a lot of politics out there, and it is not easy to navigate it well enough for people to even consider such a notion. Measuring team productivity instead of an individual's sounds right theoretically but not practically.

That said, I think this is a book worth keeping on your book-shelf if you happen to be responsible for managing a project and a team.