Why use the command line Archives - Powercmd

A Beginner’s Guide to Smart Contracts: Learn What They Are and How to Use Them

Smart contracts are becoming an increasingly important part of the digital world, enabling faster, more secure transactions and exchanges with little to no human involvement. But what exactly are they and how do they work? This beginner’s guide to smart contracts will provide you with a comprehensive overview of this revolutionary technology, outlining the basics of what they are, their advantages, and how you can use them. We’ll explain the basics of coding and programming that are required to create and use smart contracts, and provide you with some helpful resources to get you started. With this guide, you will have all the information you need to start understanding and leveraging smart contracts for your business or organization.

What are Smart Contracts?

A smart contract is computer code that facilitates, verifies, or enforces a contract or a business rule. In principle the logic can be expressed in many programming languages, although in practice each blockchain platform supports specific ones. Smart contracts can be used in a variety of ways, but most commonly they replace the need for third-party verification and mediation in a contract. This is an attractive feature because it creates trust between parties without the need for a middleman. Contracts are agreements between parties that specify what each party is responsible for, when those responsibilities must be completed, and what the consequences are if one or both parties don’t follow through. When you sign a contract, you agree to these terms and promise to fulfill your responsibilities as outlined in the contract. The problem is that when there is a dispute, it can be extremely difficult to prove that one party fully upheld their end of the agreement. With smart contracts, though, you can program the terms of your contract into code and arrange for a digital “oracle” – an external data source – to tell the contract whether both parties have upheld their responsibilities. The oracle supplies the information the contract needs to verify its terms. If the terms of the contract are not met, the contract is not verified, and the funds are not released.

Advantages of Smart Contracts

  • Fast, secure transactions – smart contracts execute quickly and securely, with no human intervention needed once the conditions of the contract have been met. Both parties can rely on the terms being fulfilled as quickly as possible, at any time of day or night.
  • Cheaper, more efficient operations – since smart contracts remove the need for third-party verification, the cost of fulfilling business operations drops significantly. This allows businesses to offer their products and services at lower cost, which helps them expand and scale more easily.
  • Trust – since smart contracts are fully digital, you don’t need to trust the other party to uphold their end of the bargain. The conditions are programmed into the code, and fulfilment of the terms can be verified and authenticated by an independent computer program.

Understanding the Basics of Coding and Programming

In order to write and use smart contracts, you will need to understand the basics of coding and programming. While there are many languages that can be used to write smart contracts, this guide assumes familiarity with either C++ or JavaScript, since Solidity, the most widely used smart contract language, borrows from both. If you are not familiar with either of these languages, you can easily find tutorials online. Many online learning platforms also offer courses on coding and programming, and you can find many free tutorials and guides on how to learn these skills. Programming is the process of creating a set of instructions for a computer to follow. While some programming languages are designed for specific purposes, like creating websites, others are designed for general use. The first step to creating a smart contract is to set up a programming environment, a place to write and execute the code. The most common setup is a development framework on your own machine together with a local blockchain node or test network, so that contracts run in a sandboxed virtual machine (on Ethereum, the Ethereum Virtual Machine, or EVM) before you deploy them for real.
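
As a rough sketch of what such an environment can look like, assuming Node.js and npm are already installed and using the Hardhat framework as one popular option (not the only one):

mkdir my-contract && cd my-contract   # create and enter a new project folder
npm init -y                           # create a package.json for the project
npm install --save-dev hardhat        # install the Hardhat development framework
npx hardhat                           # launch the interactive wizard to scaffold a sample project
npx hardhat compile                   # compile the sample Solidity contracts
npx hardhat test                      # run the sample tests against an in-process EVM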

How to Use Smart Contracts

Once you’ve written your smart contract, you will need to run it on a blockchain. A blockchain is a decentralized, distributed ledger that is used to record transactions and store data. The best-known blockchain is the one behind Bitcoin, but smart contracts are more commonly deployed on platforms such as Ethereum, NEO, Stellar, and Siacoin. Your smart contract will need to be compatible with the blockchain you select, so make sure you choose one that is appropriate for your needs. You can also use a hybrid approach that combines a private blockchain with a public one: build your smart contract on a public blockchain like Ethereum and then take it off-chain by moving it to a private blockchain using a “relay” or “bridge”. This is useful if you don’t want to incur the cost of keeping your smart contract on the public blockchain.
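
With the Hardhat setup sketched above, for example, you can spin up a local test blockchain and deploy to it (scripts/deploy.js is a placeholder for whatever deployment script your project uses):

npx hardhat node                                        # start a local Ethereum test network in this terminal
npx hardhat run scripts/deploy.js --network localhost   # in a second terminal: deploy the contract to that network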

Programming Languages for Smart Contracts

Before you start writing your smart contract, it’s important to understand the different programming languages that are commonly used in smart contract development. To find the right language for your specific needs, consider the following factors:

  • The industry you’re in – different industries place different demands on a smart contract language. The healthcare industry, for example, requires a much higher level of security and encryption than a typical retail use case.
  • The level of scalability you need – are you planning to expand your business to other countries, or will the contract only be used within the U.S.? Some languages and platforms are better suited to scaling globally than others.
  • How readable your code is – you want your code to be as easy to read as possible while still providing the required functionality. If it is too simplistic, it may not work as intended; if it is too complicated, it becomes more susceptible to bugs and errors.

Different Types of Smart Contracts

Now that you understand what a smart contract is, how they work, and the basics of coding and programming, you can start to explore the different types of smart contracts and how they can be used. Here are some of the most common types:

  • Asset-backed smart contracts – one party transfers an asset like real estate or gold to another party in exchange for a certain amount of money. They are often used in lending, for example when real estate is pledged to a bank as collateral.
  • Tokenization – a traditional asset is tokenized and put onto a blockchain. This is most often used in real estate, where buyers and sellers use a smart contract to represent a percentage of a property rather than the property itself.
  • Insurance contracts – similar to asset-backed smart contracts, but used to represent insurance policies.
  • Human resource management – used to record and verify employee information.

Ethereum and Its Smart Contract Platform

Ethereum is a decentralized computer network that runs smart contracts. It was designed to be used for anything, not just financial transactions, which is why it is known as a “general-purpose blockchain”. Ethereum smart contracts are most commonly written in the Solidity programming language. The Ethereum blockchain originally used a proof-of-work (PoW) consensus algorithm, in which “miners” used high-powered computers to solve complex mathematical puzzles; the first miner (or mining pool) to solve the puzzle earned a block reward, and the puzzle’s difficulty adjusted automatically to keep block times stable. In September 2022 Ethereum switched to proof-of-stake (PoS), in which validators who lock up ETH take turns proposing and confirming blocks instead. Ethereum is currently the most commonly used and widely accepted blockchain platform for smart contracts.
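
You can even talk to an Ethereum node straight from the command line over its JSON-RPC interface; a minimal sketch, assuming you have access to an RPC endpoint (the URL below is a placeholder):

curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  https://rpc.example.com
# the node answers with the latest block number as a hexadecimal string,
# in the form {"jsonrpc":"2.0","id":1,"result":"0x..."}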

Resources to Get Started with Smart Contracts

Programming is a skill that takes time to develop and perfect. While you can find the basics of coding and programming online, it is best to find a local programming community where you can work with others to practice and hone your skills. If you are looking for a place to start, we recommend visiting Meetup.com, where you can find communities centered around programming and a variety of other skills. Some great resources for learning more about smart contracts include:

  • Cryptoeconomics – one of the most comprehensive resources on cryptoeconomics and blockchain technology.
  • Cornell University’s SCARAB (Semantic Contract Analysis and Representation) – a project dedicated to the analysis and representation of semantic contracts.

Examples of Smart Contracts

– Real Estate Trading – Smart contracts can be used for a variety of real estate trading situations,

Whys and Hows of Using the Command Line for SEO

A command-line interface is a text-based UI you can use for a multitude of tasks. So, why is SEO in the title?

Coding of any kind can benefit you as a professional. As you master the basics, you will be able to access files and give commands much faster. And speed is often a priority for businesses.

It’s logical. The faster you optimize a project, the sooner it will show results.

Today, we’re talking about using the command line for SEO.

How the Command Line Helps SEO

The command line is the ancestor of the graphical user interface. By utilizing it now, you can positively impact your SEO career.

When you use CLI (command-line interface), you can:

  • Search for certain parts of a large report
  • Access files
  • Split data into smaller pieces that are more convenient 
  • Verify status codes, and more.

You can do all of this manually, so why learn a new skill?

Graphics take time to load, and you need time to find the right function and click on it. Processes go much faster when you navigate through the CLI: manipulating text and files directly is almost always quicker than waiting for an interface to render.

Besides, if the site resides on a remote server, the CLI is often the only practical way to navigate and work with it. When it comes to scripts, it is pretty much the same story.

Of course, there are quick and easy solutions like the technical website audit tool by SE Ranking, which checks the status codes of all website pages automatically. It can check the status codes of all pages, images, and other elements and produce a report with other technical issues and recommendations in a few minutes. Sure, there are tools for other functions as well. However, learning to ‘talk’ to your machine through the command line will speed up everything else you do, and it lets you check status codes yourself as a bonus.
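
For a taste of what the tasks listed above look like in practice (crawl.csv stands in for any large crawl or log export; the individual commands are explained in the next section):

grep "/blog/" crawl.csv > blog_rows.csv   # pull out the rows for one section of the site
split -l 50000 crawl.csv part_            # split a huge export into 50,000-line chunks
grep -c ",301," crawl.csv                 # count how many rows mention a 301 redirect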

Commands an SEO Specialist Should Know & How to Use Them

First of all, you need to know how to access the command line. The starting point depends on the operating system you use:

  • If it’s Windows, search for ‘cmd’ in the Start menu
  • If it’s Mac, you can find Terminal in the ‘Utilities’ folder inside ‘Applications’

Keep in mind that the commands differ between Mac, Linux, and Windows, since the former two are UNIX-based. They also offer a more capable command-line environment, so we’ll talk mostly about them.

Tip: If you’re working on Windows, install the Windows Subsystem for Linux (WSL) and you’ll be able to use the same UNIX-style interface.

Here are some commands an SEO specialist should know when working with CLI:

  • For navigation.
    First of all, you can always find out which directory you are in by using the pwd (print working directory) command. Your home directory (abbreviated with a tilde ~) is the natural base: keep your working files and folders inside it.

    The cd command means ‘change directory’ and takes you where you need to be. If you need to go deeper and access another folder within the one you changed to, just write a clear path to it – cd directory_name/subdirectory_name. Running cd on its own (or cd ~) returns you to the home directory, while cd - takes you back to the previous directory.

    You can also list the files in the current directory using the ls command.

    To find out the type of a file, use the file command.
  • For making and editing.
    To create a directory, use mkdir directory_name. You can create several directories at once by separating their names with spaces (not commas), and nested parent and child directories with mkdir -p parent/child.

    To move (or rename) a file or a folder, use the mv command with the current name and the destination.

    To remove, use rm and the name (add the -r flag for a directory).

    To create an empty file, use touch and the name.

    To copy an object, use cp, the old name, and the new name. To copy a file into another folder, give the directory name instead of the new name; to copy a whole directory, add the -r flag.
  • For manipulation.
    For large files that take an eternity to load in a graphical UI, the CLI is a true savior.
    To see the beginning or the end of a file, use the head or tail command with the name of the file. You’ll see the first or last 10 lines, but you can change that number by adding -n and a number.
    To print the contents of a file to the screen, use cat (concatenate) and the name of the file.

    To combine several files into one – cat file1 file2 > combined_file. If you need to append one file to another rather than create a third, use cat file1 >> file2. For example, if you research links, you can combine several exported files this way.

    For a word count – wc -w and the name of the file. For a character count – wc -m and the file name.

    To look for certain characters or words in a file, use the grep command – grep “word(s)” file_name.

    And if you need to chain several commands, put the pipe symbol | between them: the output of one command becomes the input of the next.
  • For accessing the Web.
    The most important command here is curl. It lets you send requests to a server and transfer or download data, which makes it perfect for checking status codes and similar tasks. A combined example follows this list.
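
As a rough sketch of how these commands fit together in day-to-day SEO work (the folder, the crawl_export.csv file and the URL are placeholders for your own data):

cd ~/seo/reports                                    # go to the folder with your exports
grep ",404," crawl_export.csv > broken_pages.csv    # keep only the rows that mention a 404 status
wc -l broken_pages.csv                              # count how many broken pages were found
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/some-page   # ask the live server for its status code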

Tasks an SEO Specialist Can Do Using the Command Line

Using the commands above, and more sophisticated ones, you can firstly navigate directories. It’s easy and quick to find yourself in the folder of your choice. Then, you will be able to create directories and files, move and remove them, copy, cut, paste, find the exact word or number you’re looking for, and much more.

Furthermore, you can access the Net and ‘speak’ directly to the server of the client’s website.

As a result, you can perform many complex SEO tasks by simply talking to your device. In combination with dedicated SEO tools like SE Ranking, you’ll quickly get in-depth information about the performance of the website, backlinks, competitors, etc.

Summary

The command line can be your best friend in all things SEO. Learn the most basic commands first and then gradually shift from the graphical interface toward the command line. The result is speed and accuracy, which makes you a much better professional.

Why the command line is needed on modern computers

We’re figuring it out on Macs and Windows.

Almost all modern programs and operating systems are designed for either finger or mouse control. And that’s a very good thing if you’re opening up your computer for the first time and don’t know what’s there yet. All the icons are on the screen, you can select the right command, everything is clear.

But you will notice that experienced users hardly ever touch the mouse – they do most of the work with the keyboard, which is much faster, especially once muscle memory kicks in.

Programs have hotkeys to speed things up. And the operating system has a command line – it’s like hotkeys, only for the whole computer.

What the command line can do
The command line can do everything the operating system can do, and more:

  • copy and move files, rename them, and create new folders;
  • format disks, mount and unmount disks;
  • run applications and programs without an interface, give them tasks, get results;
  • change system settings, work with the network;
  • automate all of this to a certain extent; and much more.

How to invoke the command line
The command line is built into every Windows or MacOS computer. The program that gives you access to the command line is called a terminal.

If you have Windows, you need to press the Win+R key combination, type cmd in the window that appears and press Enter. In MacOS, press Cmd+space, type terminal, and then press Enter (this is the default setting and can be changed).

You will see a window where you can enter commands for the computer.

How it works
The command line works like this: you write commands to the computer and it executes them. There are internal and external commands.

Internal commands are those that are already built into the operating system. They can be used to control the computer within the basic capabilities of the operating system.

External commands are all the programs that the user puts on the computer himself. It often happens that when a program is installed, it adds the auxiliary programs it needs to the system – and they too become external commands. For example:

  • you install the VS Code program on your computer to program in Python;
  • then you can type code at the command line, press Enter, and the program will run (provided the installer added the command to your PATH);
  • this means that immediately after installing the program, the computer has a new command, code; a quick check follows this list.
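
A quick way to confirm that such an external command really exists (the file name here is just an example):

which code          # on Mac or Linux: prints the path to the code command if the shell can find it
where code          # the equivalent check in the Windows cmd prompt
code my_script.py   # opens the file in VS Code straight from the terminal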

Why a web developer needs a command line
Because most frameworks are installed and managed from the command line. Angular, for example, lets you generate new applications and components through its command-line tool.

If you’re a web developer, the command line will come in handy:

  • to install all the server tools, such as PHP, Apache and MySQL;
  • to test APIs and send queries;
  • to set up the server environment and access control;
  • for working with repositories and creating backups of projects;
  • for testing the server under load;
  • for getting logs for server analysis (a few sample commands follow this list).
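
Here is a minimal sketch of what a few of these tasks can look like, assuming a Debian or Ubuntu server; the package names, paths, API endpoint and repository URL are placeholders and will differ in your setup:

sudo apt install php apache2 mysql-server       # install the basic server stack
curl -s https://api.example.com/v1/status       # send a quick query to an API endpoint
git clone https://github.com/user/project.git   # fetch a project repository
tar czf project-backup.tar.gz project/          # create a compressed backup of the project
tail -f /var/log/apache2/error.log              # watch the Apache error log in real time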

Examples of command line usage

You can use the command line to do many different things, from managing a server to searching for files. But its real power shows up when we need to perform many operations of the same type.

Let’s imagine the following situation: we downloaded 30 podcasts that we want to listen to on the road. But after downloading, we found out that the volume of all the recordings is very low, and even with everything turned up to maximum it is still not enough. To listen to the podcasts, we have to:

  • start the audio editor,
  • open each file in turn,
  • manually set the volume to the right level,
  • save the file,
  • open the next one and do the same thing again,
  • repeat 28 more times.

Obviously, this will take a lot of time, and it’s easier to download other podcasts than to spend so much effort on these. But in MacOS, for example, we can open a terminal and write two commands there (this assumes the lame MP3 encoder is installed, for example via Homebrew):

cd podcasts
for file in *.mp3; do echo "$file"; lame --scale 8 "$file" "loud_$file"; done

The first command goes to the directory with the podcasts and the second

  • takes all the .mp3 files in that folder;
  • prints each file’s name so you can follow the progress;
  • runs the lame encoder with the --scale 8 parameter, which raises the volume of that file eightfold and writes a louder copy;
  • repeats the cycle until all the files are processed.

As a result, we get louder copies of all the podcasts in the same folder. In terms of time, it is much faster than doing everything manually. But to do that you need to know the features of the command line, how to work with it, and the commands and their parameters.

Here’s what else you can do through the command line:

  • monitor processor load;
  • set up automatic program updates;
  • make scheduled backups (see the example after this list);
  • generate texts with neural networks and immediately publish the results to your Telegram channel;
  • collect emails from all your mailboxes, filter out only the important ones, merge them into a single message, format it nicely and print it out on a printer;
  • and anything else, as long as there is a command or parameter to call for it.
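
For instance, a scheduled nightly backup can be set up with the standard cron scheduler; a minimal sketch with placeholder paths (crontab -e opens your personal schedule for editing):

crontab -e
# then add a line like this to create a compressed backup every night at 02:00
0 2 * * * tar czf /backups/podcasts-$(date +\%F).tar.gz /home/user/podcasts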

Why be able to work on the command line?

Today we are going to talk about why to learn the GNU/Linux operating system, the advantages of working at the command line, and how this all relates to the Unix philosophy.

In the world of desktop operating systems, like Windows, the concept of a program is highly specialized. If you want a program to watch a video, you go and download a particular program that was specifically written for that purpose. If you want to listen to music, there is also a whole range of programs that do this. In general, for any typical task, a huge number of programs have already been written and distributed under different conditions for different purposes.

As a rule, these programs are quite large and come with extra integrations and functions that you may not need at all. Sometimes it reaches the point of absurdity, when you have to download and install gigabyte-sized packages just to be able to edit text. For example, in order to show you advertisements from time to time, developers integrate the code of full-fledged browsers like Chrome into their programs. Or, if they want to play a video inside an application, instead of integrating with the standard programs already installed on your system, they embed a full-fledged video player inside their product, with a lot of unnecessary code that needlessly slows down your system and eats up precious memory.

The problem with this approach starts to emerge when you need to solve a fairly specific task, and you can’t find a ready-made program that implements the desired logic.

“How am I going to run Photoshop on this Linux of yours? I’ve got work to do, no time for nonsense.”

Don’t get me wrong, this approach works quite well when it comes to professional packages: engineering, creative packages like AutoCAD, MATLAB, Adobe Photoshop, Blender and many others. These large and rather bloated software solutions allow millions of people around the world to learn useful professions and create amazing products, adopting best practices and standardizing workflow.

At the other extreme, we would have to master programming just to implement the necessary functionality every time. And this would turn into a mad waste of time for a huge number of people, forcing them to dive into areas which are not core to them.

Between these two extremes there is an intermediate level that lets you solve non-standard tasks for which no ready-made program is provided, while still relying on relatively large ready-made building blocks. We are freed from the need to write program code, yet we keep a fairly flexible system of convenient utilities which, when combined into chains, can create the logic we need.

These building blocks are simple programs, utilities which may be quite useless on their own but are originally designed to be used in conjunction with others. The GNU/Linux operating system has about a hundred of these basic utilities, available out of the box in any typical distribution. Thousands more are usually in the standard repositories and can be installed by a single command. Many of these utilities can be run in different modes, using a standardized flag system. With this flexibility, we have a truly limitless field of experimentation, where only your imagination can limit your possibilities.

And the glue for these utilities is simple text, which they read from standard input, modify as part of their work, and pass on to the next utility. You get a kind of data pipeline. This simple, human-readable text is the universal communication interface.
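
A minimal sketch of such a pipeline, assuming a web server log in the common combined format (the file name and the field positions are assumptions; the point is only how the utilities chain together):

awk '$9 == 404 {print $7}' access.log | sort | uniq -c | sort -rn | head
# take only the requests that returned 404, keep the URL column,
# count how often each URL occurs and show the ten most frequent ones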

If Unix were being developed today, I do not think it would end up very popular. For example, its designers would impose an object system, as Microsoft did with PowerShell, or some serialized data format such as JSON or YAML. They would certainly use encryption and certificates to sign the data packets, and the protocol would be binary and unreadable without special utilities. And working with the file system and with the kernel from user space would feel like working with a database.

But fortunately for us, all these traditions of overcomplication came much later, and the founding fathers of Unix were not prone to them. And, of course, what they built, they built wisely.

“I’m learning Python, it’s the future, but your bash is ancient junk.”

Maybe such quick-and-dirty pipeline chains will never claim to be a complete, long-term solution, beautifully designed and optimally implemented, but that is more than compensated by the ease of use and the ability to quickly sketch a rough draft and play with the data using nothing but the standard means of the operating system. And once you have found the right approach, you can always write a proper script in Python, if you have such a desire.

This experience will be especially invaluable for engineers, who often have to work in very cramped conditions, so they need to use every opportunity to automate and simplify their work.

“Linux needs to be fine-tuned with a file to make it look decent. Here are my configs.”

If you’re a programmer, your world is usually limited to your one personal computer, where you can install any programs you want to make your work more efficient. So you will probably have dozens of additional utilities installed on your machine, and the standard Bash will be replaced by the more advanced Zsh. The configuration files of many programs will be customized, and a collection of aliases for the commands you run will create a comfortable private language that only you understand. You may even use some exotic GUI, like the i3 tiling window manager, with its own settings and hotkeys.
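
A couple of lines from such a personal setup might look like this (the alias names and the config file are, of course, a matter of taste):

# in ~/.zshrc
alias gs='git status'                 # a two-letter shortcut you will type hundreds of times a day
alias serve='python3 -m http.server'  # spin up a quick local web server in the current folder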

By taking this process to the limit, your virtual workplace can resemble the cockpit of a spacecraft from a sci-fi movie in terms of sophistication and technology.

“And that’s right, I don’t work with less than three screens. And only with a mechanical keyboard.”

However, what about those professionals who build and administer complex systems consisting of thousands of servers? They simply don’t have the ability to install handy utilities on all the machines and tweak everything to their liking. Often there is even no way to physically install anything, because access to the Internet on a production server can be blocked altogether. And some virtual machines will be so old or so important that you will be afraid to even breathe on them, lest you accidentally break something.

You have to make do with what you have at hand. And here the ability to work in a standard environment, the command line, multiplied by the enormous potential that is embedded in the operating system itself, will be the moment of truth, which distinguishes a good specialist from a mediocre one. Knowledge of hi-tech new trends, modern hacks extending the functionality of standard utilities, and supposedly making them more convenient, will not help you in any way. All of this knowledge will be completely useless in real life and will only distract you. Also, knowledge about the graphical desktops and how they differ from each other will be completely useless.

I repeat once again, the graphical desktop interface or cell phone interface is not worse or better than the command line interface. You just have to understand the strengths and weaknesses of both approaches and use each where it is more appropriate.

“And I work from Remote Desktop to my work machine and write code. It’s all working swiftly.”

Imagine working from home, connecting to your office computer through an encrypted VPN gateway. The company has decided that you can’t connect to the production systems directly, but only from your office computer’s subnet. So we connect to that computer in order to reconnect to a remote server on the Internet. But that server is only an intermediate gateway, sometimes called a bastion host, which has one network interface connected to the worldwide network and the other to the internal protected network where all the most important servers are located. Therefore, through it we have to reconnect once more, directly to the server we originally wanted to work with.

A graphical interface simply cannot work adequately in these conditions. It is possible to send the image over such distances, through so many intermediate nodes, but it will be very inconvenient. The picture will not update in real time, creating at best a constant delay, and at worst constant connection interruptions. And sometimes the latency is such that it is simply impossible to work.

However, even if the bandwidth between your computer and the destination server is only a few kilobytes per second, which is often the case, connecting to the virtual terminal via SSH can still be quite comfortable. Using the command line interface, you can make any change, check and fix any problem, test an application, check the system for security, check the system logs, and much more. All you need to know is which commands to run and in what order. And the system will responsively do everything you ask of it, without unnecessary questions. It’s as if you’re not working on a remote server on the other side of the world, but on your home station. This is because transmitting a short text command is many times easier than a continuous video stream.
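
The whole chain described above can be expressed in a single OpenSSH command; a sketch with placeholder host names, assuming you have SSH access to each hop:

ssh -J you@office-pc,you@bastion you@prod-server
# -J (ProxyJump) hops through the office machine and the bastion host
# and opens a shell on the production server at the end of the chain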

“It’s kind of weird to stare at a black screen in the 21st century, a graphical interface is better.”

Also, unlike graphical interfaces, with a command-line interface the information you’re working with is always unambiguous and focused. You always see the text exactly where you expect it to be, not a smeared palette of colors with letters scattered somewhere across the screen. Clearly, with some effort you can create well-designed, high-quality graphical interfaces, but they are a long way from standardization. Each program strives to provide its own unique user experience that sets it apart from its competitors, and these visual components rarely match the overall look and feel of the other applications running on your computer.

You can cry that this is old-fashioned, outdated and useless, but the fact is that in the IT world, working in the terminal is the main and often the only possible way of working with complex information systems.

And, of course, no one forbids you to work with a command line interface and use the same graphical interface, opening the necessary sessions in the many tabs of the virtual terminal, and at the same time keep open dozens of pages with the necessary documentation in your browser.

“Windows is enough for me. I’ve been working with it for many years and I’m fine with it. And for gaming I have a PlayStation.”

Clearly, if you use your computer for entertainment, you probably use an operating system like Microsoft Windows or MacOS and are quite happy with it. If your profession simply leaves you no other choice, you will use the software that your colleagues use and have no problems at all.

However, let’s take a broader view. What if you use your computer for tasks for which there is no single and generally accepted toolkit? Imagine hundreds of servers scattered around the globe, dozens of complex systems like Kubernetes, relational and document-oriented databases, message brokers, and various supporting infrastructure. In complex microservice architectures consisting of hundreds of interacting services, every day we have to deal with a lot of abstractions and conventions, which are not easy to remember, let alone work with them. Programmers, systems administrators, DevOps engineers, network security specialists, data analysts – all these professions require different tools, different skills and different approaches. And among the same programmers, there are those who develop application software, others who develop websites or mobile applications. There are those who specialize more in frontend or backend. But we shouldn’t forget about those who work close to the hardware and write device drivers or develop embedded systems. And these are just some of the possible directions.

You simply will not write graphical applications for all the variety of tasks that arise on a daily basis for the people who create and operate such systems. On the contrary, these needs are often naturally covered by the interaction of the simple utilities that are included in the basic build of any GNU/Linux distribution.

Yes, perhaps not as nice, not as presentable, but, from an engineer’s point of view, the command line completely covers the needs and allows you to avoid distractions and focus directly on the task at hand.

Therefore, the very question of “what is better” makes no sense. For the tasks that the specialties mentioned above set before us, the command line is the best possible interface, and thanks to its simplicity it can be used almost anywhere, even for tasks it was never created for.

“Who uses this operating system anyway. Mac is better.”

Ordinary people tend to exaggerate the importance of operating systems from Microsoft and Apple, treating them as a kind of global standard, because in their daily lives they have no occasion to discover that anything else exists. Yes, today operating systems based on the Linux kernel have not won a worthy place among desktop systems, although in fact they are in no way inferior. But let us be honest and not forget that, for whatever reason, those same corporations have completely lost the market for server solutions. And today the GNU/Linux operating system, and partly the BSD family of systems, completely dominates this segment.

They are used today not only in classical servers located in data centers all over the world, but also in cell phones, routers, supercomputers, all kinds of household appliances, cars, video cameras, and many other applications. And I would not be surprised if even in spacecraft and lunar modules, where there are no special hardware requirements, everything works about the same as on your home computer.

All those social networks and messengers, as well as the numerous Internet resources without which we cannot imagine our lives today, overwhelmingly likely use Unix-like operating systems for their functioning. And in all these and many other applications, your knowledge will be up to date and not obsolete for a very long time to come.

“Wait, what about .Net? That’s a very popular technology.”

And yes, I know that .Net and some other solutions are actively used on the backend of many reputable companies. However, this is more the exception than the rule, and global trends suggest otherwise. They have not been able to shake the dominance of platforms such as Java, and Java itself feels great on servers running GNU/Linux and OpenJDK, even if the developers themselves use a Microsoft system in their work and have never seen Linux.

It does not matter which framework or programming language is more modern, faster or more architecturally successful, as long as large corporations still prefer Java for business reasons.

The very existence of systems such as Mono, which makes development in .Net cross-platform, and WSL, which allows you to use Linux inside Windows, speaks to how precarious Windows’ position in this market segment is today.

“I’m a writer, actually. Why would I need to dive into all this?”

Even if you are not an engineer and it does not resonate directly with your work, the GNU/Linux operating system has many interesting and useful tools that have not only specific “IT” applications, but are also part of the world’s heritage. Such tools include, for example, the Git version control system and the TeX typesetting system.

I want to stress once again that it is not necessary to take the so-called Unix philosophy as the ultimate truth. It is merely an abstraction, an idea which in some cases may be convenient, in others not sufficient. However, the fact that it has been in active use for so many years and that no real alternatives have been devised during this time makes it clear that it has stood the test of time and is not about to go anywhere.

“Linux is completely obsolete, and everything has to be rewritten. All the code in there is from the seventies, the nineties at most.”

However, amongst the variety of opinions, you can occasionally hear that this whole approach is obsolete and needs to be completely overhauled. All these operating system utilities, such as the text editor vim, the shell bash, the systemd init system and many others (underline as applicable), are used, the argument goes, only because they come preinstalled in most distributions, and only therefore remain popular. If it were not for this brake on progress, leaving no chance for the younger generation in its quest to “make the world a better place”, we would long ago have been using handy modern tools instead of being held hostage by this historical junk.

Unfortunately, this approach can often be found in various discussions, which means that such a belief is firmly planted in the minds of many people. There is nothing easier than to tear some statement out of its historical context and elevate it to an absolute. Well, let me continue this line of reasoning by saying that the Linux kernel itself is completely outdated, if only because it is monolithic. Let us rewrite it too and switch to a more “advanced” micro-kernel architecture, as we did, for instance, with the Minix operating system.

Let us also not forget that the processor architecture we use today is also completely obsolete, as it pulls decades of backward compatibility with technologies that no longer exist. Let us urgently create a new architecture that will be devoid of all this legacy, and use it exclusively. Then let’s train millions of specialists to be able to implement it in all production and server facilities around the world. And we won’t forget to completely rebuild the factories for the new technological process, because we need to replace the entire global server fleet in the shortest possible time.

All this will require, at best, hundreds of billions of dollars and years of international effort. But what is that compared to the opinion of some experts, really?

Of course, new operating systems, programming languages, tools and libraries must be developed. Progress should by no means be stopped. New and progressive trends, however weak they may have been when they emerged, have every chance to strengthen and replace obsolete approaches and practices. However, it would be very strange to expect that what is being used now will completely disappear from history without any trace. Most likely, elements of the ideas and software code we use today will remain, for future generations, multiple vestiges and anachronisms with which they will be forced to coexist. Including those technologies that we consider advanced and most successful at this stage, but which will undoubtedly become obsolete in the new historical realities.

So while someone else waits and suffers, we will study and use what we have, understanding that behind all these technologies are decades of hard evolutionary work made up of the labor and effort of millions of people. Among them are both good and bad professionals making good and bad decisions. Some of them work in cramped conditions according to a corporate plan over which they have no control; others are free artists working in their spare time on what really interests them.

So let us not forget the principle of historicism and base our careers and lives on this understanding. And let us also remember that if we can achieve anything, it is only because we stand, without a doubt, on the shoulders of giants.

Why the command line in the 21st century

Now that we have become acquainted with the command line, the question may arise: why do we need it in the 21st century, when there are such powerful graphical interfaces, like the windowed GUIs of Windows and other operating systems?

Why control a computer through a line of text? What is the point?

Let’s try to get to the bottom of this. Here I want to focus on a few reasons which, in my opinion, are the main ones for using this type of interface.

In my opinion, using a command line interface is about stability and reliability.

The thing is, when you and I use some sort of graphical interface, we have a lot of different processes running on our computer that may have very little to do with anything of direct relevance.

For example, we may be running graphics drivers, window management systems, drivers for the mouse and other devices, and in general a whole series of additional utilities that may have nothing to do with the job at hand.

As you understand, the more programs we have to deal with, the more likely it is that our system simply will not work.

Any program can fail. Any program can freeze. Any program can do something wrong and you get an error. With the command line, you have a minimum number of programs running, you only have what you need.

To make the command line break, you have to try very hard.

As for windowed mode, you have probably seen the white, blue or black screen of death many times: big errors when something breaks. With the command line, this situation is almost unheard of.

We get stable, reliable operation, which helps a lot when you’re trying to solve a problem.

The next reason why I think it’s worth using the command line is the low consumption of resources.

In order to work with text, as you understand, you don’t need a lot of resources at all. If you’re working with some remote computers, the low resource consumption helps a lot.

First of all, you get speed. The data is downloaded almost instantly and you work as if you were working on your own computer.

So there is practically no difference. With a graphical interface, it would take far more resources to transfer the images, so there would be noticeable delays between your actions and the response.

If you work only with the command line, you have no extra utilities installed, which means less space taken up on the disk and, accordingly, less money paid to your provider.

You can connect to and work with the command line from almost any device. You can even do it from your phone: install the appropriate program (an SSH client, for example), connect to your computer, and everything will work almost instantly.
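
For example, from a phone with an SSH client installed, a session is just (the user name and address are placeholders):

ssh user@203.0.113.10   # connect to the remote machine
uptime                  # immediately see the load and how long the server has been running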

Working with the command line is reliable and fast.
