Web developer and learner. Love to talk about society, education and religion (I don't pray though). Wish to travel a bit more. Started reading again after 8 yrs.

Compaq and Coronavirus


To live in a moment that will be in history books is not a particularly pleasant experience; history, though, has another cruelty: the moments that are not remembered at all.

Compaq’s Impact

Consider Compaq: it was one of the most important companies in tech history, and today it is all but forgotten. For example, look at this brief history of the IBM PC I wrote in 2013:

You’ve heard the phrase, “No one ever got fired for buying IBM.” That axiom in fact predates Microsoft or Apple, having originated during IBM’s System/360 heyday. But it had a powerful effect on the PC market.

In the late 1970s and very early 1980s, a new breed of personal computers was appearing on the scene, including the Commodore, MITS Altair, Apple II, and more. Some employees were bringing them into the workplace, which major corporations found unacceptable, so IT departments asked IBM for something similar. After all, “No one ever got fired…”

IBM spun up a separate team in Florida to put together something they could sell IT departments. Pressed for time, the Florida team put together a microcomputer using mostly off-the-shelf components; IBM’s RISC processors and the OS they had under development were technically superior, but Intel had a CISC processor for sale immediately, and a new company called Microsoft said their OS – DOS – could be ready in six months. For the sake of expediency, IBM decided to go with Intel and Microsoft.

The rest, as they say, is history.

But wait, there was one critical part of this story that I excluded! IBM wasn’t completely stupid: while much of the IBM PC was outsourced, the BIOS — Basic Input/Output System, the firmware that actually turned on the PC hardware and loaded the operating system — was copyrighted, and, IBM presumed, defensible in court. Compaq, though, figured out how to reverse-engineer the BIOS anyway. Rod Canion, who co-founded Compaq, explained on the Internet History Podcast:

What our lawyers told us was that, not only can you not use it [the copyrighted code], anybody that’s even looked at it — glanced at it — could taint the whole project. (…) We had two software people. One guy read the code and generated the functional specifications. So, it was like reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then, once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function…

[We had] just a bull-headed commitment to making all the software run. We were shocked when we found out none of our competitors had done it to the same degree. We could speculate on why they had stopped short of complete compatibility: It was hard. It took a long time. And there was a natural rush to get to market. People wanted to be first. There was only one thing for us: we didn’t have a product if we couldn’t run the IBM-PC software. And if you didn’t run all of it, how would anyone be confident enough to buy your computer, if they didn’t know they were always going to be able to run new software? We took it very, very seriously.

The result was a company that came to dominate the market; in fact, Compaq was the fastest startup to hit $100 million in revenue, then the youngest firm to break into the Fortune 500, then the fastest company to hit $1 billion in revenue. By 1994 Compaq was the largest PC maker in the world.

Compaq’s Virtualization

Canion was, by that point, long gone; the board had ousted him in 1991 when the company was struggling to compete with direct-to-consumer PC makers selling “good enough” computers that were not nearly as well-engineered as Compaqs, but were faster to market and much cheaper. New CEO Eckhard Pfeiffer introduced the low-cost Presario line, which leveraged cheaper parts to break the sub-$1,000 price point, leading to Compaq achieving that first place position. By 1996, though, growth was again slowing, and Pfeiffer needed a new plan. Part 1 was expanding into more markets; Bloomberg explains part 2:

The second part of the formula — for producing profits along with growth — will involve wider use of outsourcing and partnership deals. That’s because the new financial yardstick — return on assets — will force the divisions to slash investment in assets such as plant, inventory, and overhead wherever possible. If the $3 billion home-PC business can cut its asset base, for instance, it can still deliver a 20% annual return to the company — even though price competition in home PCs will likely keep operating margins at around 2%.

To get there, Compaq has already started “virtualizing” parts of its business. After cutting $57 off the cost of each home PC last year by building the chassis at its plant in Shenzhen, China, the company went a step further in cutting the cost of business desktop PCs: Instead of investing millions to expand the Shenzhen plant, Gregory E. Petsch, senior vice-president for operations, persuaded a Taiwanese supplier to build a new factory adjacent to Compaq’s to build the mechanicals for the business models. The best part of the deal: The Taiwanese supplier owns the inventory until it arrives at Compaq’s door in Houston. “This is the right way to do it,” says Sanford C. Bernstein & Co. computer analyst Vadim D. Zlotnikov.

It worked for a time: Compaq’s stock price surged over the next two years as the company rode the Internet wave and outsourced not only the building of PCs and eventually their design, but also their new businesses:

To compete in the big-iron business profitably, Compaq is counting on a series of relationships with other companies that can supply the kind of handholding that companies such as IBM are famous for. Instead of investing in legions of field technicians and programmers — and building up costly assets — the computer maker will use the resources of systems integrator Andersen Consulting and software maker SAP, among others. These companies have the personnel to install and maintain systems the way IBM or HP do. So Compaq gets to play in the big-iron market without incurring the costs of running its own services or software businesses. Using these partners, Compaq is already delivering packages of networks, servers, and services to big customers including General Motors, British Telecommunications, First Interstate Bancorp, and Deutsche Bundespost.

Compaq, however, may not be able to play through their intermediaries forever. “The real solution is to create your own capability. It takes longer and is more painful, but ultimately, it is more successful,” says Graham Kemp, president of G2 Research Inc.

Compaq never did bother; the engineering determination exemplified by Canion was long gone, and soon Compaq was as well: the company merged with HP in 2002 (resulting in a huge destruction of shareholder value), served as the badge for HP’s cheapest computers for a decade, and in 2012 was written off completely for $1.2 billion.

And no one even noticed.

Coronavirus Action

Compaq’s demise was, to be fair, first and foremost about the value chain within which it competed. The entire reason why Compaq could build the business it did was because as long as you had an IBM-compatible BIOS, an x86 processor, and a license for Windows, you could sell a PC that was compatible with all of the software out there. That, though, meant commoditization in the long-run, which is exactly what happened to Compaq and, it should be noted, basically all of its competitors.

Still, while I could not ascertain exactly which Taiwanese manufacturer it was that Compaq persuaded to build its PCs and hold them on its balance sheet, I suspect there is a good chance it is still in business: companies like Quanta and Compal took over PC manufacturing in the 1990s, and PC design entirely in the 2000s. Brand names were simply that: names, and not much more. This, of course, made for a fantastic return on assets; it was not so great for long-term sustainable revenue and profits.

It is at this point, 1400+ words in, that I must make what is probably an obvious analogy to the historical moment we are in. While there may have been an opportunity to stop SARS-CoV-2 late last year, by January (when the W.H.O. parroted China’s insistence that there was no human-to-human transmission), worldwide spread was probably inevitable; the New York Times brilliantly illustrated the travel patterns that explain why.

Since then, though, there has been divergence between countries that acted and countries that talked. Taiwan, where I live, is perhaps the best example of the former; Dr. Jason Wang wrote an overview of Taiwan’s actions (and published a list of 124 action items), including:

  • Passengers on flights from Wuhan were screened for fever starting in December, and banned from entry in January; the rest of Hubei Province, and then China as a whole — including non-Chinese who had recently visited China — soon followed.
  • Data from the National Immigration Agency was integrated into the National Health Insurance Administration, allowing officials to quickly match up COVID-19 symptoms with recent travel history; full access was given to hospitals in late February.
  • People designated for home quarantine are tracked via their smartphones, and fined heavily for any violations.

What stood out to me was mask production; on January 23, the day that China locked down Wuhan, Taiwan had the capability of producing 2.44 million masks a day; this week Taiwan is expected to exceed 13 million masks a day, a sufficient number for not only medical workers but also the general public. The mobilization bridged government, industry, and workers, and is ongoing — the plan is for Taiwan to be able to export masks soon.

The public has done its part as well: most restaurants and buildings check the temperature of anyone who enters, and far more people than usual are wearing said masks, which worked to stop the spread of SARS in 2003, and which are likely particularly effective in the case of asymptomatic carriers of SARS-CoV-2.

The Great Resignation

The contrast with Western countries is stark: to the extent government officials across the Western world were discussing the coronavirus a month ago, it was to express support for China or insist that life carry on as before; I already praised the role Twitter played in sounding the alarm — often in the face of downplaying from the media — but even that was, by definition, talk. What does not appear to have happened anywhere across the West is any sort of meaningful action until it was far too late.

This has resulted in two problems: first, by the time Western governments acted, the only available option was widespread lockdowns. Second, the talk itself is missing even the possibility of action. For example, over the last 48 hours there has been increasing discussion about trade-offs, specifically the trade-off between limiting the spread of the coronavirus and the halt in economic activity that is required to do so. Given how much I write about tradeoffs, I must surely consider this a good thing, no?

In fact, I think it is incredibly tragic, but not for the reasons you might think. The fact of the matter is that we do make tradeoffs between human lives and economic activity all the time — speed limits are perhaps the most banal example. What is truly tragic is the utter lack of resolve and lack of a bias for action in this so-called tradeoff. The only options are to give up the economy or give in to the virus: the possibility of actually beating the damn thing is completely missing from the conversation. To put it another way, the West feels like Compaq in the 1990s, relying on its brand name and partnerships with other entities to do the actual work, forgetting that it was hard work and determination that made it great in the first place.

The best overview of how actual hard work could make a difference was written by Tomas Pueyo in an article entitled The Hammer and the Dance; to briefly summarize, the idea is to lock down now to stop the uncontrolled spread of SARS-CoV-2, and then leverage the same sort of epidemiological tools that countries like Taiwan have, including aggressive quarantining of known infections and extensive contact tracing.

This gets to the second reason why the current discussion of tradeoffs is so disappointing: not only is it debating a tradeoff that we don’t necessarily need to make, at least in the long run, it is also foreclosing discussions on tradeoffs we absolutely need to consider. Consider this picture:

Police scooters checking on a quarantined citizen

That was taken by me, outside of my apartment building; apparently one of my neighbors just returned from America and the police were checking on his home quarantine. In fact, look more closely at what Taiwan has done to contain SARS-CoV-2 to date — you can reframe everything in a far more problematic way:

  • Restrict international movement and close borders (including banning all non-resident foreigners this week)
  • Integrate and share private data across government agencies and with hospitals.
  • Track private individual movements via their smartphones.

Even the mask production I praised required the government to requisition private property, and the willingness of local businesses to refuse service to customers without masks, or to insist on taking their temperature, is probably surprising to many in the West.

And yet, life here is normal. Kids are in school, restaurants are open, the grocery stores are well-stocked. I would be lying if I didn’t admit that the rather shocking assertions of government authority and surveillance that make this possible, all of which I would have decried a few months ago, feel pretty liberating even as they are troubling. We need to talk about this!

Policing Talk

The first problem of being a society of talk, not action, is the inability to even consider hard work as a solution; the second is a blindness to the real trade-offs at play. The third, though, is the most sinister of all: if talk is all that matters, then policing talk becomes an end in itself.

I know, for example, that I am going to get pushback on this Article, telling me to stay in my lane and leave discussions of the coronavirus to the experts or government officials. Never mind that so many of those experts and officials have made mistake after mistake — it’s all in the memory hole now!

This is not at all to say that non-experts have the answers either; as I wrote last week, the amount of misinformation is exploding. Rather, the point is that this is a situation with an unmatched-in-my-lifetime combination of massive uncertainty and unfathomable stakes. It follows, then, that the likelihood of any one person or entity having the correct answer is low, while the imperative to allow the right answer to bubble up — or, more accurately, be discovered step-by-step, idea-after-discarded-idea — is high. There is more value than ever in verifying or disproving ideas and information, and far more danger than ever in policing them.

Moreover, if the real tradeoffs to consider are about trading away civil liberties — which is exactly what has happened in Taiwan, at least to some extent — then the imperative to preserve debate about these matters is even greater. The most precious civil liberty of all is the ability to talk. Indeed, that is the terrible irony of losing the capability and will for action: it ultimately endangers the only thing we seem to be good at, and in this case, the potential writedown is too terrible to consider.


How to put your kid in Scratch


I was asked by a few parent friends how I put my kid in Scratch, so here's my guide to how to put (and animate!) your kid in Scratch, à la:

1. Photos

I used my phone's camera to capture these pictures. He changed his pose as I took each picture. You should try to find relatively visually quiet surroundings for the pictures. Here are the original photos I used:

It also helped that there was contrast between his clothes and the background around him.

Then transfer these pictures to a computer so you can upload them to a site to remove the background.

2. Removing the background of the photo

Visit www.remove.bg in your browser, then one at a time, click "select a photo" and upload the photos you took, then download the newly generated photo.

The site will also handle correctly rotating the image for you:

How does it work? Machine learning, commonly known as AI. A model is trained on a metric tonne of images that have already been categorised into subject and background, and that training is then used on new images to distinguish background from foreground.

Note that your photos are not stored on the site's servers and are not used for AI training. There's more under "Do you use my data to train your AI".

Process each photo, then head over to Scratch for the last part.

3. Adding to scratch

You're going to create a sprite that has multiple costumes using the images you've created.

3.1. Create a new sprite

From the bottom right, you need to hover over the sprite icon and select "Upload sprite".

Now select one of the processed photos.

Don't resize the image or reposition it at this point - you're going to create the sprite and all its different "costumes" (the positions your kid is posing in), and then you'll be able to re-position and resize the sprite later.

Select the sprite and select the "Costumes" tab (towards the top left of the screen). You should see this:

3.2 Adding kid positions

For more poses, you need to create more costumes for the sprite. Hover over the icon in the bottom left and select "Upload Costume" from the menu (second from the top).

Keep uploading each processed photo you took, until you have something like this:

It's also worth giving the costumes a name that's memorable, for instance "kick" or "pre-kick" etc.

3.3 Making the sprite move

Switch over to the "Code" panel, and with your kid's sprite selected, you're going to make the sprite cycle through some costumes when a key is pressed.

These blocks tell the sprite: when the space key is pressed, immediately change to the "pre-kick" costume, then wait 0.1 seconds, then change to "kick", then wait 0.2 seconds, then return to the "ready" costume. The effect is that he completes a kick.
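Written out in the plain-text scratchblocks notation (the costume names are taken from the example above, so swap in whatever you named yours), the stack looks roughly like this:

when [space v] key pressed
switch costume to [pre-kick v]
wait (0.1) seconds
switch costume to [kick v]
wait (0.2) seconds
switch costume to [ready v]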

  • "when [space] is pressed" is found in the Events blocks
  • "switch costume to [ … ]" is in the Looks blocks
  • "wait [x] seconds" is in the Control blocks

You can also add your own background or other sprites and add more key presses for different events to occur, like jump, or run left or run right, etc.

This final stage is where you can make adjustments to the size and position of the sprite (rather than in the costumes panel). In this case, my kid's sprite has been set to 120% size and rotated slightly.

Have fun!

Originally published on Remy Sharp's b:log


Deploy From a Private Github Repo


How do you automatically deploy from GitHub if your repo is private? The first time I had to do this it was a nightmare, simply because the GitHub documentation assumes that all of its users possess a knowledge base bordering on the absurd. The reality is that setting this up is easy once you’ve seen how it all works. This setup should take you around 15 minutes to complete.

Summary

A git command-line program comes pre-installed on most/all Linux distributions and Mac OS operating systems, and you’re possibly already familiar with using git to clone a public repo. You’ll be pleased to know that this same git program also understands alternative syntax for private repos, providing you with the means to pass an SSH private key in order to gain access to and clone the repo, just like you’ve done previously with your public repos. This is great news if you’re trying to implement a seamless continuous integration process for a deployment pipeline that includes a private repo. But, this introduces some questions:

  • Where is the SSH key for my private GitHub repo?
  • Where do I save the SSH key on my deployment server so that GitHub receives it?
  • How do I add my SSH private key to a “git clone” command?

The short answers:

Deployment Keys: For private repos GitHub provides an additional configuration page where you can manage deployment keys. But strangely, GitHub doesn’t automatically generate the SSH key for you. In fact, when you click their button to add a deployment key you get a blank screen like the following.

GitHub (somewhat optimistically) assumes that you know how to create your own SSH key, which you would then paste into this box. The steps that follow will walk you thru how to do this. It’s easy btw. The procedure that follows will help you create an SSH key pair. That is, you’ll create a public key and a private key, each in its own file. You’ll store the public key in this GitHub “Add Deployment Key” page.

Where to save the SSH key on your deployment server: you’ll store the private key in the root of your home folder on your deployment server — assumed to be some kind of Linux server — in ~/.ssh/. There are details however. Keep reading, please.

Command-line Syntax: it’s not intuitive, but it’s easy to understand once someone has shown it to you. Keep reading, please.

Work Flow

1. Create an SSH key pair from any computer.

For a long time I’d been under the impression that some higher power created SSH keys, and that these were passed down from the heavens to mere mortals like myself. So boy was I surprised to learn about a standard Linux command named ssh-keygen that churns these out in a matter of a few seconds. GitHub actually provides very good documentation on how to use this tool to create the SSH key pair that you’ll need in order to create a deployment key.

I’m republishing GitHub’s exact instructions (for Linux and Mac), along with my own annotations, for your convenience.

  1. Open Terminal.
  2. Paste the text below, substituting in your GitHub email address.
    $ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

    This creates a new ssh key, using the provided email as a label.

    > Generating public/private rsa key pair.
  3. When you’re prompted to “Enter a file in which to save the key,” press Enter. This accepts the default file location.
    > Enter a file in which to save the key (/Users/you/.ssh/[A reasonable name].id_rsa): [Press enter]
  4. At the prompt, type a secure passphrase. For more information, see “Working with SSH key passphrases”.
    > Enter passphrase (empty for no passphrase): [Type a passphrase (if you want), or leave blank]
    > Enter same passphrase again: [Type passphrase again (if you provided one above)]

You might also benefit from knowing that ssh-keygen will create two distinct files; one with the public key data and another with the private key data. You can name these files anything you want.

The public key will have the following form:
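It looks roughly like this (a single line, truncated here, ending with the email label you supplied to ssh-keygen):

ssh-rsa AAAAB3NzaC1yc2E...long run of base64-encoded key material...= your_email@example.com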

Whereas the private key will have this form:
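It is a multi-line block wrapped in BEGIN/END markers (newer versions of ssh-keygen write “OPENSSH PRIVATE KEY” instead of “RSA PRIVATE KEY”); this is the file you must keep secret:

-----BEGIN RSA PRIVATE KEY-----
...many lines of base64-encoded key material...
-----END RSA PRIVATE KEY-----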

2. Create a GitHub deployment key

GitHub also provides good documentation on creating Deployment Keys for a private repo. At least, their documentation makes sense for anyone who already has experience creating said keys. You should definitely refer to their documentation. The short story is that you need to copy/paste the public key from the previous step into the large text box below.
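If you’re not sure where that public key is, you can simply print it in the terminal and copy it from there (the file name below is a placeholder; use whatever name you gave the key when running ssh-keygen):

$ cat ~/.ssh/your-key-name.id_rsa.pub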

3. Create a .ssh/config file on your deployment server

Everything in this section happens on your deployment server. That is, the server on which you want to clone the contents of your private repo. First, copy the contents of your private key to a new file of the same name, located in ~/.ssh/.
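As a rough sketch of that step, using the same file name that appears in the config example below (substitute whatever you actually named your key) and the restrictive permissions SSH requires before it will use a private key:

mkdir -p ~/.ssh
vim ~/.ssh/github-theme.id_rsa    # paste the private key contents, then save and quit
chmod 700 ~/.ssh
chmod 600 ~/.ssh/github-theme.id_rsa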

Afterwards, create/edit an SSH config file to create a custom profile for accessing your private GitHub repo. The SSH config file stores data about, among other things, which private key to associate with your private repo. The git command line syntax for cloning a private repo assumes that you have done this, and I assure you that after having created this file, you will magically acquire the ability to clone your private repo from the command line.

vim ~/.ssh/config

Add the following to your config file:

Host my-amazing-private-repo
HostName github.com
User git
IdentityFile ~/.ssh/github-theme.id_rsa
IdentitiesOnly yes

After saving this file you should be able to execute a git clone command of the following form:

git clone git@my-amazing-private-repo:[your GitHub account name]/[your private repo name].git

For example, if I were executing this for a private repo named “machine-learning-mojo” in my own personal GitHub account then the command would take the following form:

git clone git@my-amazing-private-repo:lpm0073/machine-learning-mojo.git

I hope you found this helpful. Please help me improve this article by leaving a comment below. Thank you!

The post Deploy From a Private Github Repo appeared first on Lawrence McDaniel.


10 Milestones in the History of Mathematics according to Nati and Me


In 2006, the popular science magazine “Galileo” prepared a special issue devoted to milestones in the history of several areas of science, and Nati Linial and I wrote the article about mathematics, “Ten milestones in the history of mathematics” (in Hebrew). Our article had 10 sections, each highlighting one or two discoveries.

Here are our choices. What would you add? What would you delete?

 

  

The list

1) Numbers and Number Systems – The Irrationality of the square root of 2

Discovery No.1: the square root of 2 is not a rational number.

 

2) Geometry, the Discovery of Non-Euclidean Geometry, and Topology

Discovery no.2(A): Euclidean Geometry

Discovery no.2(B): Non-Euclidean Geometry

 

3) Algebra, Equations and Mathematical Formulas. Galois Theory.

Discovery no.3: Abel-Galois Theorem: there is no solution by radicals to the general equation of degree five and above.

4) Analysis and the Connection to Physics

Discovery no. 4(A): Differential and integral calculus (Isaac Newton, Gottfried Leibniz, 17th Century).

Discovery no. 4(B): The analysis of complex functions (Augustin-Louis Cauchy, Bernhard Riemann, 19th century).

 

Nati Linial

5) Proofs and their Limitations: Logic, Set Theory, the Infinity, and Gödel’s Incompleteness Theorem.

Discovery no. 5(A): There are various kinds of infinity. For example, there are more real numbers than natural numbers.

Discovery no. 5(B): Gödel’s Incompleteness theorem: any sufficiently broad mathematical theory includes true statements that cannot be proven within it.
 

6)  Linear Algebra, Linear Programming and Optimization

Discovery no. 6(A): The Gaussian elimination method for solving systems of linear equations.

Discovery no. 6(B): Linear programming and the Simplex algorithm for solving it.

 

7) Probability Theory and the Bell curve

Discovery no. 7: The Bell Curve and the Central Limit Theorem

 

 

8) Prime Numbers and their Density

Discovery no. 8: The Prime Number Theorem.

 

9) Algorithms, Digital Computers and their Limitations

Discovery no. 9 (a): The theory of computability. Undecidable problems.

Discovery no. 9 (b): Computational Complexity theory. The theory of NP-complete problems.
 

10)  Applied Mathematics

Discovery no. 10: Additional paradigms of mathematical research beyond the paradigm of theorem/proof. Numerical methods, simulations, scientific computation and the development of mathematical models.

 




How to radically simplify bug reporting in GitLab


If you’re like us, you’re constantly pushing out new features and improvements to your product, but with those updates and changes comes the inevitable risk of bugs. The best people to find and fix those bugs are your internal reporters and developers, but getting the whole team to report bugs into GitLab can be hard.

Whether it’s your copywriters on the lookout for wonky content, your QA testers that find a broken form, designers that spot a font size five times too big, or your customer support team receiving word that a billing issue is blocking customers from paying – reporters can take forever to send actionable feedback to developers, who in turn don’t always get the information they need to smash those bugs.

What a bug-reporting workflow usually looks like …

… for reporters

Because reporters aren’t always super tech-savvy, it can be tricky for them to share reports that are helpful for your developers. The process is long, complicated, and tracking down the crucial technical information isn’t always easy.

In most teams, reporting bugs into GitLab looks like this:

  1. Find the bug.
  2. Open screenshot tool, capture bug.
  3. Open software to annotate screenshot, add comments.
  4. Open and log into GitLab.
  5. Select the correct project.
  6. Create new issue.
  7. Document the bug. (How exactly do I do this!?)
  8. Add technical information. (What is this even?)
  9. Attach screenshots.
  10. And then finally: submit report.

That’s a whopping 10 steps to report even the smallest bugs.

And we didn’t even mention the super-fun scavenger hunt reporters have to go on to identify all of the environmental data developers need to even start thinking about fixing the bugs.

… for developers

Developers get feedback flying at them in all forms – emails, phone calls, sticky notes and screenshots.

They’re ready to gouge their eyes out because they can’t reproduce the reported bugs, because they’re not receiving actionable feedback from the get-go, and they don’t have time to investigate all the bug reports they receive.

So what can you do to make sure everyone can contribute?

Speed up workflow for reporters

We created Marker.io to speed up and simplify your team bug reporting. Now, those 10 steps are only three:

  1. Capture and annotate screenshot of bug.
  2. Send bug reports straight to your GitLab project.
  3. Keep hunting for more bugs!

One real-life example is an issue we ran into with our pricing page a while back. During our QA process, we noticed a weird bug: the price for our Team Plan was mysteriously missing. Instead of using the lengthy process mentioned earlier in this post, we used Marker.io to quickly send feedback to our dev team and get the bug fixed in no time.

This is what reporting the issue with Marker.io looked like:

Creating the bug report issue in GitLab

Now, not only is the process much faster, but you never have to leave your website, there is nothing to configure, and all the technical data the developers need is automatically captured by Marker.io.

Create actionable reports for your developers

Once a visual feedback tool like Marker.io is introduced into the equation your developers can choose where they receive feedback, down to the specific bug-tracking GitLab project, and the important technical data they need is automatically grabbed and included in every bug report.

That means environment data, including:

  • Browser
  • Operating system (OS) and version
  • Screen size
  • Zoom level
  • Pixel ratio

Here’s an example of what a Marker.io bug report looks like in GitLab:

The bug report issue inside GitLab

This GitLab issue has all the information needed for your developers to act on it:

  • The issue is in the correct project.
  • Any pre-set epics, milestones or labels are included.
  • The issue is assigned to a team member.
  • The annotated screenshot is attached.
  • The expected and actual results are well documented.
  • The steps to reproduce are detailed.
  • The technical environment information is all there.
  • The issue has the URL where the screenshot was captured.
  • The issue has a due date.

No more wasted time following up with reporters to fill in the gaps. It’s all there, organized directly in your chosen GitLab project – complete with everything vital to fix your bugs.

Want to try for yourself? Marker.io comes with a free 15-day trial. Give it a go ➡️ Marker.io/gitlab

About the guest author

Marie Hargitt is the Marketing Manager of Marker.io, a powerful tool that makes bug reporting and visual feedback easy for the whole team.


Polishing GitLab’s UI: A new color system


We receive a lot of feedback from our users and the broader community. After hearing that there is a perceived lack of consistency and quality in GitLab’s UI, we decided to take a look at our color palette.

Aesthetic aspects like this are a fundamental part of the UI. If we don’t get these right, everything else in the UI won’t feel, look, or behave correctly. Like a house, these aesthetics are the foundation upon which everything else is built.

Our color palette had various issues, so that’s where we started.

Why start with colors?

There are many aesthetic aspects to a UI. So why tackle colors first? Well…

  • Colors are easy to change: it’s just a matter of changing simple values in our variables.scss file.
  • Color changes don’t affect layout: we weren’t reinventing the wheel, so these changes wouldn’t influence the layout and spacing between elements like typography can.

And, more subjectively, colors have a huge impact on the perception of a UI. It’s said that 90 percent of information entering the brain is visual and color is an attention-grabbing device.

Issues with the previous color palette

Previous color palette

It didn’t extend the brand colors

They weren’t in line with our brand colors, with the most obvious example being the pinkish-red normally associated with negative aspects like errors or irreversible actions. We already have a red from our brand, so why use a different one?

There were too many similar colors

With so many colors, it wasn’t easy to tell them apart. They were so similar that they no longer brought value to the table, just more guesswork and maintenance.

There wasn’t enough contrast

Many of our color combinations did not meet the contrast ratios defined in the Web Content Accessibility Guidelines (WCAG).

Note that some of these issues were also applicable to grayscale colors (also called “achromatic”).

Building a better palette

At GitLab, we’ve done a lot of things while standing on the shoulders of giants, aligning with our company value of boring solutions. As such, one of our initial thoughts was to use an existing color palette, something that could save us time and maybe serve as the basis for our work.

We soon found Open color, an open source color scheme optimized for UI. It has 13 hues, each with 10 levels of brightness, totaling 130 different colors. All of the values are there, and it would be easy for our Frontend team to get started by importing it as a dependency. This was starting to look very promising, and we were getting excited about this quick start.

However, the more we thought about our current needs and goals, the more we realized that this approach wasn’t going to work for us. Existing color palettes usually had too many colors for our needs, and the ones we did need would have to be tweaked to align with our brand colors. All of the upsides of using an existing color palette were now irrelevant.

We went back to the drawing board, starting with defining the goals we wanted our new color palette to achieve:

  • Align with and extend our brand colors
  • Have only the hues that we need, the colors that have meaning in the UI
  • Be accessible by passing the WCAG

1. Extending the brand

The first step in creating our new color palette was inspired by “Add Colors To Your Palette With Color Mixing,” where we used ColorSchemer Studio to generate this color wheel from the three brand colors and the primary purple used on this site:

Color wheel generated from the brand colors

Initial colors were separated by even intervals of hue and manually tweaked. In the image above, the matching brand colors are next to the wheel for reference.

2. Cutting the rainbow

Then, we generated tints and shades for some of the hues in that color wheel: green, blue, purple, red and orange.

Tints and shades

These were first obtained from the Material Design Palette Generator and then tweaked manually using Colorizer and Eric Meyer’s Color Blender. The dark orange colors are a good example of manual tweaking as they initially looked very “muddy.”

It’s important to consider the number of tints and shades that you need, as that affects the flexibility when applying those colors. Our guiding principle here was to provide clear and visible contrast between each step of the scale. If we had steps that were too similar, the difference wouldn’t be noticeable, which meant that there was no value in having those colors.

We didn’t want all of the colors of the rainbow, just the ones that carry meaning effectively. We want to be able to communicate states and actions by applying colors to elements in the UI (e.g. informational elements are associated with blue). If you have too many similar colors in a UI, like green and lime, you’re expecting too much not only of your users but also of your team. On the one hand, most of your users won’t notice the difference between colors when placed in a complex UI, so they also won’t pick up the different meanings. On the other hand, your team will have more work learning, working with, and maintaining unnecessary colors.

Additionally, we shouldn’t rely on color alone to communicate something, so that’s also another point for not having too many similar colors. This is actually one of the success criteria of the WCAG about the use of color:

Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element.

3. Colors for everyone

Using a small set of colors which allows for better memorization and recognition is already a good step towards a more usable product, but it’s not enough.

Evaluating, testing, and prioritizing accessibility problems is one of our main initiatives here at GitLab. Establishing contrast between text and background is one of the key aspects of accessibility and, as we saw before, our previous color palette didn’t meet the WCAG contrast ratios. So, as we were defining our new color palette, we continually tested the colors using the WebAIM Color Contrast Checker.

Along the way, we hit a problem: combinations of white text over green or orange backgrounds did not pass WCAG level AA for small text. This was an issue because we wanted to keep a uniform “vibrancy” and “pop” throughout all colors. While the colors looked uniform to our human eye, the WCAG test didn’t “see” them as we did. Would we be forced to “break” this visual consistency and use darker shades for those colors? Not only that, but this would render them too dark to carry meaning effectively. In the following example, the “success” meaning of green or the “warning” meaning of orange become less immediate as their contrast increases.

Warning and success elements can be more or less noticeable but that affects the result of the WCAG contrast tests

We found an interesting take on this at the Google Design website, which intentionally uses colors that at least pass AA for large text:

Due to this site’s purpose being a source for visual design reference and inspiration, we felt it was acceptable not to target a stronger color contrast level. — Behind the Code — Google Slash Design Accessibility

Considering our audience and user base, should we be rigid and enforce AA level for small text? As a first step towards better color contrasts, we decided to set our minimum at AA for large text, even for small text. For grays, we tested and tweaked their contrast against light gray backgrounds, as that is a common color used to differentiate regions in the UI.
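For the curious, here is a minimal sketch in Python of what such a contrast check computes. It uses the standard WCAG 2.0 relative-luminance formula; it is not GitLab’s own code, and the example hex values are placeholders rather than colors from our palette.

def relative_luminance(hex_color):
    # Relative luminance of an sRGB color given as "#rrggbb" (WCAG 2.0 definition).
    hex_color = hex_color.lstrip("#")
    srgb = [int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in srgb]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    # WCAG contrast ratio between two colors, ranging from 1:1 up to 21:1.
    lighter, darker = sorted((relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# White text on a mid green passes AA for large text (ratio >= 3.0)
# but falls short of AA for normal text (ratio >= 4.5).
ratio = contrast_ratio("#ffffff", "#2e8b57")
print(round(ratio, 2), ratio >= 3.0, ratio >= 4.5)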

All tints and shades with corresponding WCAG levels, including grays

Color priorities

So, after all this work, we introduced a wide range of color tints and shades with the new color palette. The problem was that there was no guidance for using them. Some color decisions are fairly quick and intuitive, but we wanted to standardize and make the color selection process as objective as possible for everyone, even developers. We want to give people the chance to make a decision without imposing approval or reviews by the UX team. We want to be lean, efficient, and focus on results.

Some questions that we should be able to answer:

  • “I need to use one blue, which shade should I pick?”
  • “This UI component needs three contrasting shades of green. Can I pick whichever I want?”

The Material Design colors have been a great source of inspiration for us. They follow the numeric naming conventions used by the CSS font-weight property, where a higher value equals a higher degree of blackness. So, we’ve named our colors from the lightest (50) to the darkest (950).

On top of this naming scheme, we’ve defined a system of color priorities. This is similar to how different font weights are used to create contrasting typography that communicates hierarchy.

We can apply this same logic to colors, as seen in the image below, by tagging them according to their priority: from 1 to 4. If you need guidance, the priorities can help you make better choices. When choosing how to apply color to a UI component:

  • You start at priority 1, which is the medium weight 500. There’s only one shade with priority 1 per color (the “default” shade).
  • For more shades of the same color, you could then choose from the next priority level, number 2, which can either be 300 (lighter) or 700 (darker). And so forth for even lighter or darker shades.

All tints and shades with corresponding priorities, names, and WCAG levels, including grays

What’s next

Along the way, we’ve learned that mixing colors and defining color palettes is neither purely science nor purely art; it’s a subjective balance in the human mind. Color harmony depends on many factors, like culture, age, social status, or even the designer’s intent.

We’ll have to see how people use the 11 tints and shades and how they’re applied in our Design System. This is a constant evolution, and we’re always iterating (as we should be).

Next, we’re going to review our color meaning guidelines and be more active in their usage, not only in the product but also in our Design System and pattern library.

A new color palette and a color priority system are seemingly small steps towards a better user experience throughout GitLab, but they do make a big difference for our users, our team, and every contributor. This is the first initiative to polish our UI styles; next, we’re implementing our new type scale, which will deserve a dedicated blog post.

If you have any questions, feel free to post a comment below, tweet at us, or join the discussion on the following issues:


Cover image by David Clode on Unsplash.
