The Best Binary Option Trading Platforms and Brokers of 2020
Binary Options Robot 2020 - Best Auto Trading Software
Binary Option Robot Get Your Free Auto Trading Software
Free Secret Binary Software and Strategy Binary Today
Binary Options Review Panther
Welcome to the Binary Options Review Panther Reddit! Our passion is Binary Options trading. We strive to tell people the TRUTH about Binary Options. We write scam reviews for the latest Binary Options products to warn people about the many scams on the market, and we also review quality Binary Options software and brokers. Good luck trading, Julia Armstrong, Binary Options Review Panther
It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and moved to 11! And with that, a lot has changed under the hood in macOS. As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.
Has Nvidia Support finally arrived?
What has changed on the surface
A whole new iOS-like UI
macOS Snapshotting
What has changed under the hood
New Kernel cache system: KernelCollections!
New Kernel Requirements
Secure Boot Changes
No more symbols required
Broken Kexts in Big Sur
MSI Navi installer Bug Resolved
New AMD OS X Kernel Patches
Other notable Hackintosh issues
Several SMBIOS have been dropped
Dropped hardware
Extra long install process
X79 and X99 Boot issues
New RTC requirements
SATA Issues
Legacy GPU Patches currently unavailable
What’s new in the Hackintosh scene?
Dortania: a new organization has appeared
Dortania's Build Repo
True legacy macOS Support!
Intel Wireless: More native than ever!
Clover's revival? A Frankenstein of a bootloader
Death of x86 and the future of Hackintoshing
Getting ready for macOS 11, Big Sur
Has Nvidia Support finally arrived?
Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently, Nvidia's Kepler line is the only natively supported generation. However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. Why this is relevant comes down to Apple's initial reason for killing off Web Drivers: Secure Boot. Secure Boot cannot work with Nvidia's Web Drivers due to how early Nvidia's drivers have to initialize, and thus Apple refused to sign the binaries. With Big Sur, third-party GPU support could return; however, the chances are still super slim, though slightly higher than with 10.14 and 10.15.
What has changed on the surface
A whole new iOS-like UI
Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, with neat details in the icons). You can check out Apple's site to get a better idea:
macOS Snapshotting
Snapshotting is a feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra; now macOS's main System volume has become both read-only and snapshotted. What this means is:
3rd parties have a much more difficult time modifying the system volume, allowing for greater security
OS updates can now be installed while you're using the OS, similar to how iOS handles updates
Time Machine can now perform backups more easily, without the file inconsistencies HFS Plus suffered from while the machine was in use
However there are a few things to note with this new enforcement of snapshotting:
OS snapshots are not calculated as used space, instead being labeled as purgeable space
Disabling macOS snapshots for the root volume will break software updates, and can corrupt data if one is applied
What has changed under the hood
Quite a few things actually! Both in good and bad ways unfortunately.
New Kernel Cache system: KernelCollections!
So for the past 15 years, macOS has been using the Prelinked Kernel as a form of Kernel and Kext caching. And with macOS Big Sur's new read-only, snapshot-based system volume, a new form of caching has been developed: KernelCollections! How this differs from previous OSes:
Kexts can no longer be hot-loaded, instead requiring a reboot to load with kmutil
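For example, manually loading a kext now goes through kmutil instead of the old kextload; a rough sketch from memory (the exact subcommand flags may differ between builds, and the kext path is just a placeholder):
sudo kmutil load -p /Library/Extensions/ExampleKext.kext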
Secure Boot Changes
With regards to Secure Boot, all officially supported Macs will now support some form of Secure Boot even if there's no T2 present. This is done in 2 stages:
macOS will now always verify the ECID value against the secure boot manifest files (if present)
On T2's this ECID value is burned into the chip
On regular Macs, this value is derived from the first 8 bytes of your SystemUUID value
OS Snapshots are now verified on each boot to ensure no system volume modifications occurred
apfs.kext and AppleImage4.kext verify the integrity of these snapshots
While technically these security features are optional and can be disabled after installation, many features including OS updates will no longer work reliably once disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, and so we highly encourage all users to ensure that at minimum SecureBootModel is set to Default or higher.
Note: ApECID is not required for functionality, and can be skipped if so desired.
Note 2: OpenCore 0.6.3 or newer is required for Secure Boot in Big Sur.
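For reference, both knobs live under Misc -> Security in OpenCore's config.plist; a minimal sketch of the relevant entries (the values below are illustrative, not tuned for any specific machine):
Misc -> Security -> SecureBootModel | String | Default
Misc -> Security -> ApECID | Integer | 0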
No more symbols required
This point is the most important, as symbols are what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove them in later versions of macOS. For Big Sur's cycle we'll be fine on that end, but we'll be keeping an eye on future releases of macOS.
New Kernel Requirements
With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke. Because of this, the macOS kernel will fall flat when trying to boot. The reason is that cpu_count_enabled_logical_processors now requires the MADT (APIC) table, so OpenCore will now ensure this table is made accessible to the kernel. Users will, however, need a build of OpenCore 0.6.0 with commit bb12f5 or newer to resolve this issue. Additionally, both the Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above; thankfully, these have been resolved in OpenCore 0.6.3. To check your OpenCore version, run the following in Terminal:
nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
If you're not up to date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS
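If the variable is present, the output should look roughly like the following (the version string and date here are purely illustrative):
4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version	REL-063-2020-11-02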
Broken Kexts in Big Sur
Unfortunately, with the aforementioned KernelCollections change, some kexts have broken or been hindered in some way. The main kexts that currently have issues are anything relying on Lilu's userspace patching functionality:
Thankfully, most important kexts rely on Lilu's kernelspace patcher, which is now in fact working again.
MSI Navi installer Bug Resolved
For those receiving boot failures in the installer due to having an MSI Navi GPU installed, macOS Big Sur has finally resolved this issue!
New AMD OS X Kernel Patches
For those running on AMD-Based CPUs, you'll want to also update your kernel patches as well since patches have been rewritten for macOS Big Sur support:
Several SMBIOS have been dropped
Big Sur dropped a few Ivy Bridge and Haswell based SMBIOS from macOS, so check below that yours wasn't dropped:
iMac14,3 and older
Note iMac14,4 is still supported
MacPro5,1 and older
MacMini6,x and older
MacBook7,1 and older
MacBookAir5,x and older
MacBookPro10,x and older
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS. For those wanting a simple translation for their Ivy Bridge and Haswell machines:
iMac13,1 should transition over to using iMac14,4
iMac13,2 should transition over to using iMac15,1
iMac14,2 and iMac14,3 should transition over to using iMac15,1
Note: AMD CPUs users should transition over to MacPro7,1
iMac14,1 should transition over to iMac14,4
Dropped hardware
Currently only certain hardware has been officially dropped:
"Official" Consumer Ivy Bridge Support(U, H and S series)
These CPUs will still boot without much issue, but note that no Macs are supported with consumer Ivy Bridge in Big Sur.
Ivy Bridge-E CPUs are still supported thanks to being in MacPro6,1
Ivy Bridge iGPUs slated for removal
HD 4000 and HD 2500; however, these drivers are currently still present in 11.0.1
Similar to Mojave and Nvidia's Tesla drivers, we expect Apple to forget about them and only remove them in the next major OS update next year
Note: while AirPortBrcm4360.kext has been removed in Big Sur, support for the 4360 series cards has been moved into AirPortBrcmNIC.kext, which still exists.
Extra long install process
Due to the new snapshot-based OS, installation now takes some extra time due to sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down: doing so will corrupt your install and break the sealing process, so please be patient.
X79 and X99 Boot issues
With Big Sur, IOPCIFamily went through a significant rewrite, causing many X79 and X99 boards to fail to boot as well as kernel panic on IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
New RTC requirements
With macOS Big Sur, AppleRTC has become much pickier about whether your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:
SATA Issues
For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
This will work in both Catalina and Big Sur, so you can remove SATA-unsupported if you want. However, we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.
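For reference, MinKernel is set per-kext in your config.plist under Kernel -> Add; a minimal sketch of the relevant fields for the injected entry (values are illustrative and assume the sample kext linked above):
Kernel -> Add -> BundlePath | String | AppleAHCIPort.kext
Kernel -> Add -> MinKernel | String | 20.0.0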
Legacy GPU Patches currently unavailable
Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend users with these older GPUs either stay on Catalina until further developments arise or buy an officially supported GPU.
What’s new in the Hackintosh scene?
Dortania: a new organization has appeared
As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes, to both ease users and give a single trusted source for information. We work quite closely with the community and developers to ensure information is correct, up to date and of the best standard. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information. And for the times our information is either outdated, missing context or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:
Dortania's Build Repo
Kexts here are built right after each commit, and the repo currently supports most of Acidanthera's kexts and some 3rd party devs as well. If you'd like to add support for more kexts, feel free to PR: Build Repo source
True legacy macOS Support!
As of OpenCore's latest version, 0.6.2, you can now boot every x86-based build of OS X/macOS! A huge achievement on @Goldfish64's part: we now support every major kernel cache format, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X like OS X 10.4! Dortania's guides have been updated accordingly to accommodate builds of those eras; we hope you get as much enjoyment going back as we did working on this project!
Intel Wireless: More native than ever!
Another amazing step forward for the Hackintosh community: near-native Intel Wi-Fi support! Thanks to the endless work of the many contributors of the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get near-identical support to that of Broadcom wireless cards, including features like network access in recovery and Control Center support. For more info on the developments, please see the itlwm project on GitHub: itlwm
Note: native support requires AirportItlwm.kext and SecureBootModel enabled in OpenCore. Alternatively, you can force IO80211Family.kext to ensure AirportItlwm works correctly.
Airdrop support currently is also not implemented, however is actively being worked on.
Clover's revival? A Frankenstein of a bootloader
As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC and many other extremely important utilities for both the Mac and Hackintosh community. OpenCore's design had been properly thought out, with security auditing and proper road mapping laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.

Now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time recently, both with the community and stability-wise: with many devs jumping ship to OpenCore and Clover's stability breaking more and more with C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die, similarly to how Chameleon lived on through Enoch. And thus we now have the Clover OpenCore integration project (now merged into master with r5123+). The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover's current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6.

As of writing, this project seems a bit confusing, as there seems to be little reason to actually support Clover. Many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; however, the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?
Death of x86 and the future of Hackintoshing
With macOS Big Sur, a big turning point is about to happen with Apple and their Macs. As we know it, Apple will be shifting to in-house designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years. What does this mean for both x86-based Macs and Hackintoshing in general? Well, we can expect about 5 years of proper OS support for the iMac20,x series which released earlier this year, with an extra 2 years of security updates. After this, Apple will most likely stop shipping x86 builds of macOS, and hackintoshing as we know it will have passed away. For those still in denial and hoping something like ARM Hackintoshes will arrive, please consider the following:
We have yet to see a true iPhone "Hackintosh", and thus an ARM Hackintosh is unlikely as well
There have been successful attempts to get the iOS kernel running in virtual machines; however, much work is still to be done
Apple's use of "Apple Silicon" hints that ARM is not actually what future Macs will be running, instead we'll see highly customized chips based off ARM
For example, Apple will be heavily relying on hardware features such as W^X, kernel memory protection, Pointer Auth, etc. for security, and thus both macOS and applications will be dependent on them. This means hackintoshing on bare metal (without a VM) will become extremely difficult without copious amounts of work
Also keep in mind Apple Silicon will no longer be UEFI-based like Intel Macs currently are, meaning a huge amount of work would also be required on this end as well
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will still remain a piece of Apple's history. So enjoy it now while we still can, and we here at Dortania will continue supporting the community with our guides till the very end!
Getting ready for macOS 11, Big Sur
This will be your short run down if you skipped the above:
Lilu's userspace patcher is broken
Due to this many kexts will break:
DiskArbitrationFixup
MacProMemoryNotificationDisabler
SidecarEnabler
SystemProfilerMemoryFixup
NoTouchID
WhateverGreen's DRM and -cdfon patches
Many Ivy Bridge and Haswell SMBIOS were dropped
See above for what SMBIOS to choose
Ivy Bridge iGPUs are to be dropped
Currently in 11.0.1, these drivers are still present
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS.

In regards to downloading Big Sur, currently gibMacOS in macOS or Apple's own Software Update are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into this situation; macrecovery.py may be more reliable if you require the recovery package.

And as with every year, the first few weeks to months of a new OS release are painful in the community. We highly advise first-time installers to stay away from Big Sur for now. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny. For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
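For reference, macrecovery.py (bundled with OpenCorePkg) is driven from the command line; a rough sketch from memory, where the board ID and MLB values are placeholders you would take from the macrecovery documentation (flag names may differ between versions):
python3 macrecovery.py -b <board-id> -m <MLB> download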
Red Hat OpenShift Container Platform Instruction Manual for Windows PowerShell
Introduction to the manual

This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. If you'd like to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on those operating systems. Be warned, however, that there are some system requirements necessary to run the CodeReady Containers we will be using. These requirements are specified in the chapter Minimum system requirements.

This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or macOS, we will focus on how to do this within Windows. If you follow this manual you will be able to do the following items by yourself:

● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application

What is the OpenShift Container Platform?

Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces; these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment. The OpenShift Container Platform is widely used because CodeReady Containers and CodeReady Workspaces help programmers and developers build their applications faster and test them in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management.

What knowledge is required or recommended to proceed with the installation?

To be able to follow this manual some knowledge is mandatory: because most of the commands are done within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you either don't have this basic knowledge or have trouble with the basic Command Line Interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:

● https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf

Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
● Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
● macOS: https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
● Linux: https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson, https://www.guru99.com/linux-commands-cheat-sheet.html, http://cc.iiti.ac.in/docs/linuxcommands.pdf

Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of PaaS and container platforms such as Docker and Kubernetes.
Minimum system requirements

Hardware requirements

CodeReady Containers requires the following system resources:
● 4 virtual CPUs
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS

Software requirements

Microsoft Windows: On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.

macOS: On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.

Linux: On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases. When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal. Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set-up of the host machine.
Required additional software packages for Linux
CodeReady Containers on Linux requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution: Table 1.1 Package installation commands by distribution
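As a rough, hedged guide (exact package names vary per distribution; consult the upstream CodeReady Containers documentation for the authoritative table):
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: sudo yum install NetworkManager
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager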
Installing CodeReady Containers

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on "https://www.openshift.com/", where you need to press login and after that select the option "Create one now".

After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from "https://cloud.redhat.com/openshift/install/crc/installer-provisioned". Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.

The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use it to go to the location in your $PATH where you extracted the CodeReady archive.

If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup
Setting up CodeReady Containers
Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.

You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the virtual machine with the $crc delete command, create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss, we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start
Note: it is possible that you will get a Nameserver error later on; if this is the case, please start it with crc start -n 1.1.1.1
Configuration
It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.
Configuring the CodeReady Containers
To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command has some requirements before it's able to configure; this requirement is a subcommand. The available subcommands for this binary and virtual machine are:

● get, this command allows you to see the values of a configurable property
● set/unset, this command can be used for 2 things: to display the names of, or to set and/or unset, the values of several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
● view, this command starts the configuration in read-only mode.

These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help. Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration. There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or turn it into a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get
C:\Users\[username]\$PATH>crc config set
C:\Users\[username]\$PATH>crc config unset
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help
Configuring the Virtual Machine
You can use the cpus and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine. To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number>. Keep in mind that the default number of vCPUs is 4, and the number you wish to assign must be equal to or greater than the default value. To increase the memory available to the virtual machine, use $crc config set memory <value-in-MiB>. Keep in mind that the default amount of memory is 9216 Mebibytes, and the amount you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number>
C:\Users\[username]\$PATH>crc config set memory <value-in-MiB>
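For example, to give the virtual machine 6 vCPUs and 12288 MiB of memory (the values are only illustrative; recent crc releases spell the property name cpus in lowercase, so adjust if your version complains):
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288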
Configuring the DNS
Windows / general DNS setup
There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:

● crc.testing, this is the domain for the core OpenShift services.
● apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.

Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks to verify the configuration will be executed.
macOS DNS setup
macOS expects the following DNS configuration for the CodeReady Containers:

● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires an api.crc.testing entry in /etc/hosts pointing at the VM IP address in order to function properly; CodeReady Containers adds this entry.
Linux DNS setup
CodeReady Containers expects a slightly different DNS configuration on Linux. CodeReady Containers expects NetworkManager to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf. To set it up properly, the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to "192.168.130.11". In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:

● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
Accessing the OpenShift cluster
Accessing the OpenShift web console
To gain access to the OpenShift cluster running in the CodeReady virtual machine, you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc). First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command. It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management, and the developer user for creating projects or OpenShift applications and the deployment of these applications.
To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps. Step 1. Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env
Step 2. Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is: & crc oc-env | Invoke-Expression
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary. To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc
Step 3. Now you need to log in as a developer user; this can be done using the following command: $oc login -u developer https://api.crc.testing:6443 Keep in mind that the crc start command will provide you with the password that is needed to log in as the developer user.
Step 4. The oc client can now be used to interact with your OpenShift cluster. If you, for instance, want to verify that the OpenShift cluster Operators are available, you can execute the command
$oc get co
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co
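The output is a table of cluster Operators and their status; roughly like the following (the operator names and versions are illustrative and will differ on your cluster):
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.5.14    True        False         False      15m
console          4.5.14    True        False         False      18m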
Demonstration
Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform. We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application. As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled. Lastly, we will show the user how to use user management within the platform.
Creating a project
To be able to create a project within the console you have to log in on the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before. When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu at the top left.

Now that you are properly logged in, press the dropdown menu shown in the image below, and from there click on create a project.

https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2

When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.

https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210
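If you prefer the command line, the oc client can create the same project; a hedged equivalent of the steps above (the name and display name are just the ones we chose):
C:\Users\[username]\$PATH>oc new-project codeready --display-name="CodeReady Container"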
There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.
In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling is adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.

One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing either the up or down arrow, more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.

https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1

In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application: the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94
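Scaling can also be done from the command line with the oc client; a hedged sketch, assuming the application was created as a deployment config named mediawiki (substitute your own resource type and name):
C:\Users\[username]\$PATH>oc scale dc/mediawiki --replicas=3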
Network
Since OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container Platform is built, ensures that the pods within OpenShift can communicate with each other via the network and assigns them their own IP addresses. This makes all containers within a pod behave as if they were on the same host. Giving each pod its own IP address means pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.

One of the changes that can be made to the networking of a pod is the route. We'll show you how this can be done in this demonstration. The route is not the only thing that can be changed and/or configured; two other options that might be interesting but will not be demonstrated in this manual are:

- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.

There is a search function within the Container Platform. We'll use this to search for the network routes and show how to add a new route.

https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da

You can add items that you use a lot to the navigation.

https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232

For this example, we will add Routes to the navigation.

https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b

Now that we've added Routes to the navigation, we can start the creation of the route by clicking on "Create route".

https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75

Fill in the name, select the service and the target port from the drop-down menu and click on Create.

https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6

As you can see, we've successfully added the new route to our application.

https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1

Storage

OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to create persistent volumes without needing any knowledge about the underlying infrastructure. Within this storage there are a few configuration options:
Retain
Recycle
Delete
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and the storage can therefore not yet be reassigned to another PV. To manually reclaim the PV, you need to follow these steps: Step 1: Delete the PV; this can be done by executing the following command
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset. Step 3: Now you can delete the associated storage asset, or, if you wish to reuse the same storage asset, you can now create a PV with the storage asset definition. It is also possible to directly change the reclaim policy within OpenShift; to do this you need to follow these steps: Step 1: Get a list of the PVs in your cluster
$oc get pv
This will give you a list of all the PVs in your cluster and will display the following attributes for each: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age. Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
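A hedged sketch of the usual way to do this, with <pv-name> as a placeholder (the JSON quoting may need escaping when run from PowerShell):
C:\Users\[username]\$PATH>oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'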
In this example the reclaim policy will be changed to Delete. Step 3: After this you can check the PV to verify the change by executing this command again:
User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. These can be a developer for developing applications or an administrator for managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.

There are multiple ways to create a user, depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform; this default denies access for all usernames and passwords. First, we're going to create a new user. The way this is done depends on the identity provider, and on the mapping method used as part of the identity provider configuration. For more information on what mapping methods are and how they function, see: https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html With the default mapping method, the steps will be as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name>
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an identity with the identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username>
For example, the following command maps the ldap_provider:mediawiki_s identity to the created user:
$oc create useridentitymapping ldap_provider:mediawiki_s <username>
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users. Below is an example of the admin clusterrole command:
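A hedged sketch of what granting that role typically looks like, assuming the oc adm policy subcommand and using <username> as a placeholder:
C:\Users\[username]\$PATH>oc adm policy add-cluster-role-to-user cluster-admin <username>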
If you followed all the steps within this manual you should now have a functioning Mediawiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:

● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users

With these skills you'll be able to set up your own Container Platform environment and host applications of your choosing.
Troubleshooting
Nameserver

There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered, a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1
Hyper-V admin

Should you run into a problem with Hyper-V, it might be because your user is not an admin and therefore can't access the Hyper-V admin user group.
Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
Click System Tools > Local Users and Groups > Groups. The list of groups opens.
Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
Click Add. The Select Users or Groups window opens.
In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
Click Apply, and then click OK.
Terms and definitions
These terms and definitions will be expanded upon; below are a few terms used in this manual together with their definitions:

● Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
● Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
● Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
● CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
● CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
Ethereum on ARM. New Eth2.0 Raspberry Pi 4 image for joining the Medalla multi-client testnet. Step-by-step guide for installing and activating a validator (Prysm, Teku, Lighthouse and Nimbus clients included)
TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to join the Eth2.0 Medalla testnet. The image takes care of all the necessary steps to join the Eth2.0 Medalla multi-client testnet [1], from setting up the environment and formatting the SSD disk to installing, managing and running the Eth1.0 and Eth2.0 clients. You will only need to choose an Eth2.0 client, start the beacon chain service and activate / run the validator. Note: this is an update of our previous Raspberry Pi 4 Eth2 image [2], so some of the instructions are taken directly from there.
MAIN FEATURES
Based on Ubuntu 20.04 64bit.
Automatic USB disk partitioning and formatting
Adds swap memory (ZRAM kernel module + a swap file)
Changes the hostname to something like “ethnode-e2a3e6fe” based on MAC hash
Automatically syncs Eth1 Goerli testnet (Geth)
Includes an APT repository for installing and upgrading Ethereum software
Includes 4 Eth2.0 clients
Includes EF eth2.0-deposit-cli tool
Includes 5 monitoring dashboards based on Grafana / Prometheus
SOFTWARE INCLUDED
Geth: 1.9.20 [3] (official binary) configured for syncing Goerli Testnets
Eth2.0-deposit-cli: 0.2.1 (bundled) [4]
Prysm: 1.0.0alpha24 [5]
Beacon Chain (official binary)
Validator binary (official binary)
Teku: 0.12.4alpha+20200821 (compiled) [6]
Lighthouse 0.2.8 (official binary) [7]
Nimbus 0.5.0 (compiled) [8]
Grafana 7.0.4 (official package) [9]
INSTALLATION GUIDE AND USAGE
RECOMMENDED HARDWARE AND SETUP
Raspberry Pi 4 (Model B) - 4 GB or 8 GB (8 GB RAM highly recommended)
MicroSD Card (16 GB Class 10 minimum)
SSD USB 3.0 disk (see storage section)
Power supply
Ethernet cable
Port forwarding
A case with heatsink and fan (Optional but strongly recommended)
USB keyboard, Monitor and HDMI cable (micro-HDMI) (Optional)
STORAGE

You will need an SSD to run the Ethereum clients (without an SSD drive there's absolutely no chance of syncing the Ethereum blockchain). There are 2 options:

Use a USB portable SSD disk such as the Samsung T5 Portable SSD.

Use a USB 3.0 External Hard Drive Case with an SSD Disk. In our case we used an Inateck 2.5 Hard Drive Enclosure FE2011. Make sure to buy a case with a UASP-compliant chip, particularly one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).

In both cases, avoid getting low quality SSD disks, as the disk is a key component of your node and it can drastically affect performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (in blue).

IMAGE DOWNLOAD AND INSTALLATION

1.- Download the image:

http://www.ethraspbian.com/downloads/ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip

SHA256 149cb9b020d1c49fcf75c00449c74c6f38364df1700534b5e87f970080597d87

2.- Flash the image

Insert the microSD in your Desktop / Laptop and download the file. Note: If you are not comfortable with the command line or if you are running Windows, you can use Etcher [10]

Open a terminal and check your MicroSD device name by running:

sudo fdisk -l

You should see a device named mmcblk0 or sdd. Unzip and flash the image:

unzip ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img.zip

sudo dd bs=1M if=ubuntu-20.04.1-preinstalled-server-arm64+raspi-eth2-medalla.img of=/dev/mmcblk0 conv=fdatasync status=progress

3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue port).

4.- Power on the device

The Ubuntu OS will boot up in less than one minute, but you will need to wait approximately 7-8 minutes in order to allow the script to perform the necessary tasks to install the Medalla setup (it will reboot again).

5.- Log in

You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum Password: ethereum
You will be prompted to change the password on first login, so you will need to log in twice.

6.- Forward the 30303 port in your router (both UDP and TCP). If you don't know how to do this, google "port forwarding" followed by your router model. You will need to open additional ports as well, depending on the Eth2.0 client you've chosen.

7.- Getting console output

You can see what's happening in the background by typing:

sudo tail -f /var/log/syslog

8.- Grafana Dashboards

There are 5 Grafana dashboards available to monitor the Medalla node (see the section "Grafana Dashboards" below).
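If you want to check the Eth1 client specifically rather than the whole syslog, a hedged example, assuming the image runs Geth as a systemd service named geth (the unit name is an assumption):
sudo systemctl status geth
sudo journalctl -u geth -f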
The Medalla Eth2.0 multi-client testnet
Medalla is the official Eth2.0 multi-client testnet according to the latest official specification for Eth2.0, the v0.12.2 [11] release (which is aimed to be the final) [12]. In order to run a Medalla Eth 2.0 node you will need 3 components:
An Eth1.0 node running the Goerli testnet in sync [13]. Geth in our case.
An Eth2.0 Beacon Chain connected to the Eth1.0 node. You will need to choose a client here (Prysm, Lighthouse, Teku or Nimbus)
An Eth2.0 Validator connected to the Beacon Chain (same client as the Beacon Chain)
The image takes care of the Eth1.0 setup. So, once flashed (and after a first reboot), Geth (the Eth1.0 client) starts to sync the Goerli testnet. Follow these steps to enable your Eth2.0 Ethereum node:

CREATE THE VALIDATOR KEYS AND MAKE THE DEPOSIT

We need to get 32 Goerli ETH (fake ETH) in order to make the deposit in the Eth2.0 contract and run the validator. The easiest way of getting ETH is by joining Prysm's Discord channel. Open Metamask [14], select the Goerli Network (top of the window) and copy your ETH address. Go to: https://discord.com/invite/YMVYzv6 And open the "request-goerli-eth" channel (on the left) Type: !send $YOUR_ETH_ADDRESS (replace it with the one copied in Metamask) You will receive enough ETH to run 1 validator.

Now it is time to create your validator keys and the deposit information. For your convenience we've packaged the official Eth2 launchpad tool [4]. Go to the EF Eth2.0 launchpad site: https://medalla.launchpad.ethereum.org/ And click "Get started". Read and accept all warnings. In the next screen, select 1 validator and go to your Raspberry Pi console. Under the ethereum account run:

cd && deposit --num_validators 1 --chain medalla

Choose your mnemonic language and type a password for keeping your keys safe. Write down your mnemonic phrase, press any key and type it again as requested.

Now you have 2 JSON files under the validator_keys directory: a deposit data file for sending the 32 ETH along with your validator public key to the Eth1 chain (Goerli testnet), and a keystore file with your validator keys.

Back on the Launchpad website, check "I am keeping my keys safe and have written down my mnemonic phrase" and click "Continue". It is time to send the 32 ETH deposit to the Eth1 chain. You need the deposit file (located on your Raspberry Pi). You can either copy and paste the file content and save it as a new file on your desktop, or copy the file from the Raspberry to your desktop through SSH.

1.- Copy and paste: Connected through SSH to your Raspberry Pi, type:

cat validator_keys/deposit_data-$FILE-ID.json (replace $FILE-ID with yours)

Copy the content (the text in square brackets), go back to your desktop, paste it into your favourite editor and save it as a json file.

Or

2.- SSH: From your desktop, copy the file:

scp ethereum@$YOUR_RASPBERRYPI_IP:/home/ethereum/validator_keys/deposit_data-$FILE_ID.json /tmp

Replace the variables with your data. This will copy the file to your desktop /tmp directory.

Upload the deposit file

Now, back on the Launchpad website, upload the deposit_data file and select Metamask, click continue and check all warnings. Continue and click "Initiate the Transaction". Confirm the transaction in Metamask and wait for the confirmation (a notification will pop up shortly). The Beacon Chain (which is connected to the Eth1 chain) will detect this deposit (which includes the validator public key) and the validator will be enabled. Congrats! You just started your validator activation process.

CHOOSE AN ETH2.0 CLIENT

Time to choose your Eth2.0 client. We encourage you to run Lighthouse, Teku or Nimbus, as Prysm is by far the most used client and diversity is key to achieving a resilient and healthy Eth2.0 network. Once you have decided which client to run (as said, try to run one with low network usage), you need to set up the client and start both the beacon chain and the validator.
These are the instructions for enabling each client (remember, choose just one Eth2.0 client out of the 4):

LIGHTHOUSE ETH2.0 CLIENT

1.- Port forwarding: You need to open the 9000 port in your router (both UDP and TCP)

2.- Start the beacon chain: Under the ethereum account, run:

sudo systemctl enable lighthouse-beacon
sudo systemctl start lighthouse-beacon

3.- Start the validator: We need to import the validator keys. Run under the ethereum account:

lighthouse account validator import --directory=/home/ethereum/validator_keys

Then, type your previously defined password and run:

sudo systemctl enable lighthouse-validator
sudo systemctl start lighthouse-validator

The Lighthouse beacon chain and validator are now enabled.

PRYSM ETH2.0 CLIENT

1.- Port forwarding: You need to open the 13000 and 12000 ports in your router (both UDP and TCP)

2.- Start the beacon chain: Under the ethereum account, run:

sudo systemctl enable prysm-beacon
sudo systemctl start prysm-beacon

3.- Start the validator: We need to import the validator keys. Run under the ethereum account:

validator accounts-v2 import --keys-dir=/home/ethereum/validator_keys

Accept the default wallet path and enter a password for your wallet. Now enter the password previously defined. Lastly, set up your password file and start the client:

echo "$YOUR_PASSWORD" > /home/ethereum/validator_keys/prysm-password.txt
sudo systemctl enable prysm-validator
sudo systemctl start prysm-validator

The Prysm beacon chain and the validator are now enabled.

TEKU ETH2.0 CLIENT

1.- Port forwarding: You need to open the 9151 port (both UDP and TCP)

2.- Start the Beacon Chain and the Validator: Under the ethereum account, check the name of your keystore file:

ls /home/ethereum/validator_keys/keystore*

Set the keystore file name in the Teku config file (replace the $KEYSTORE_FILE variable with the file listed above):

sudo sed -i 's/changeme/$KEYSTORE_FILE/' /etc/ethereum/teku.conf

Set the password previously entered:

echo "yourpassword" > validator_keys/teku-password.txt

Start the beacon chain and the validator:

sudo systemctl enable teku
sudo systemctl start teku

The Teku beacon chain and validator are now enabled.

NIMBUS ETH2.0 CLIENT

1.- Port forwarding: You need to open the 19000 port (both UDP and TCP)

2.- Start the Beacon Chain and the Validator: We need to import the validator keys. Run under the ethereum account:

beacon_node deposits import /home/ethereum/validator_keys --data-dir=/home/ethereum/.nimbus --log-file=/home/ethereum/.nimbus/nimbus.log

Enter the password previously defined and run:

sudo systemctl enable nimbus
sudo systemctl start nimbus

The Nimbus beacon chain and validator are now enabled.

WHAT'S NEXT

Now you need to wait for the Eth1 blockchain and the beacon chain to get synced. In a few hours the validator will get enabled and put into a queue (you can follow progress in the logs; see the example below). These are the validator statuses that you will see until its final activation:
UNKNOWN STATUS
DEPOSITED (the beacon chain detected the 32 ETH deposit with your validator public key)
PENDING (you are in a queue for being activated)
ACTIVATED
Finally, it will get activated and the staking process will start. Congratulations! You have joined the Medalla Eth2.0 multiclient testnet!
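While you wait, it is worth checking that everything is actually running. A minimal check, assuming the image runs Geth and your chosen Eth2.0 client as systemd services (the geth unit name is an assumption; the Eth2.0 unit names are the ones used above):
sudo systemctl status geth
sudo journalctl -fu lighthouse-beacon
Replace lighthouse-beacon with the beacon unit of the client you picked (prysm-beacon, teku or nimbus); press Ctrl+C to stop following the logs.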
Grafana Dashboards
We configured 5 Grafana dashboards to let users monitor both the Eth1.0 and Eth2.0 clients. To access the dashboards, just open your browser and type your Raspberry Pi's IP address followed by port 3000:
Lots of info here. You can see, for example, whether Geth is in sync by checking (in the Blockchain section) that the Headers, Receipts and Blocks fields are aligned, or find Eth2.0 chain info.
Updating the software
We will be keeping the Eth2.0 clients updated through Debian packages in order to keep up with the testnet progress. Basically, you need to update the repo and install the packages through the apt command. For instance, in order to update all packages you would run: sudo apt-get update && sudo apt-get install geth teku nimbus prysm-beacon prysm-validator lighthouse-beacon lighthouse-validator Please follow us on Twitter in order to get regular updates and install instructions. https://twitter.com/EthereumOnARM
How to generate (relatively) secure paper wallets and spend them (Newbies)
How to generate (relatively) secure paper wallets
Everyone is invited to suggest improvements, make it easier or more robust, provide alternatives, comment on what they like or not, and also criticize it. Also, this is a disclaimer: I'm new to all of this.
First, I didn't buy a hardware wallet because they are not produced in my country and I couldn't trust that they hadn't been tampered with. So the other way was to generate one myself. (Not your keys, not your money.) I've spent several weeks instructing myself, reading various ways of generating wallets (including Glacier). As of now, I think this is THE BEST METHOD for a non-technical person: high security, low cost, and not that lengthy.
FAQ: Why didn't I use Coleman's BIP 39 mnemonic method? Basically, I don't know how to audit the code. As a downside, we will have to write down our keys really accurately, keeping in mind that a mistype is fatal. Also, we should keep in mind that destruction of the key is fatal as well. The user has to secure the key from loss, theft and destruction.
Let's start. You'll need:
an old computer without a hard drive that will never touch the internet again
a USB stick of 8 GB or more, or a CD (never use this USB again for other purposes or connect it to an online computer)
dice (if you use coins instead, you will need to flip one 256 times and convert the binary result to hex)
Notes: We will be following the https://www.swansontec.com/bitcoin-dice.html guidelines. We will be creating our own random key instead of downloading the BitAddress JavaScript, for safety reasons. Following this guideline lets you audit the code that will create the public key and Bitcoin address. It's simple, short, and you can always test the code by inputting a known private key to check whether the Bitcoin address it generates is legit or not. This process is done offline, so your private key never touches the internet. Steps 1. Download the bitcoin-bash-tools and dice2key scripts from GitHub, the latest Ubuntu distribution, and LiLi, a tool to install Ubuntu on our flash drive (easier than what is proposed on Swansontec). 2. Install the live environment on a CD or USB, and paste the tools we are going to use inside of it (they are going to be located in file://cdrom)
Open up LiLi and insert your flash drive.
Make sure you’ve selected the correct drive (click refresh if drive isn’t showing).
Choose “ISO/IMG/ZIP” and select the Ubuntu ISO file you’ve downloaded in the previous step.
Make sure only “Format the key in FAT32” is selected.
Click the lightning bolt to start the format and installation process
3. Restart your computer. Pressing F12 or F1 during the boot-up process will allow you to choose to run your operating system from your flash drive or CD. After the Ubuntu operating system loads, choose the “Try Ubuntu” option.
4. Roll the dice 100 times and convert the rolls into a 32-byte hexadecimal number using dice2key
To generate a Bitcoin private key using normal six-sided dice, run the following command to convert the dice rolls into a 32-byte hexadecimal number: source dice2key (100 six-sided dice rolls)
5. Run newBitcoinKey 0x + your private key and it will give you your public key, Bitcoin address and WIF. Save the private key and the Bitcoin address. Check several times that you have written them down correctly; you can check by re-entering the code in the console from your paper. (I recommend writing down the private key, which is in hex, rather than the WIF, since the WIF is case sensitive and you could lose it or write it down wrong. Also, from the private key you can derive the WIF, which will let you transfer your funds.) If you lose your key, you lose your funds. Be careful. If auditing the code is not enough for you, you can also test it by inputting a known private key and checking that the Bitcoin address it generates is legit. I recommend you generate several keys and addresses, as this process is not super easy to do. Remember that you should never reuse your paper wallets (meaning that you should empty all of the funds from an address when you make a payment from it). As such, a couple of addresses come in handy.
At this point, there should be no way for information to leak out of the live CD environment. The live CD doesn't store anything on the hard disk, and there is no network connection. Everything that happens from now on will be lost when the computer is rebooted. Now, start the "Terminal" program, and type the following command: source ~/bitcoin.sh
This will load the address-calculation script. Now, use the script to find the Bitcoin address for your private key: newBitcoinKey 0x(your dice digits)
Replace the part that says "(your dice digits)" with the 64 digits found by rolling your pair of hexadecimal dice 32 times. Be sure there is no space between the "0x" and your digits. When all is said and done, your terminal window should look like this:
$ source ~/bitcoin.sh
$ newBitcoinKey 0x8010b1bb119ad37d4b65a1022a314897b1b3614b345974332cb1b9582cf03536
---
secret exponent: 0x8010B1BB119AD37D4B65A1022A314897B1B3614B345974332CB1B9582CF03536
public key:
X: 09BA8621AEFD3B6BA4CA6D11A4746E8DF8D35D9B51B383338F627BA7FC732731
Y: 8C3A6EC6ACD33C36328B8FB4349B31671BCD3A192316EA4F6236EE1AE4A7D8C9
compressed:
WIF: L1WepftUBemj6H4XQovkiW1ARVjxMqaw4oj2kmkYqdG1xTnBcHfC
bitcoin address: 1HV3WWx56qD6U5yWYZoLc7WbJPV3zAL6Hi
uncompressed:
WIF: 5JngqQmHagNTknnCshzVUysLMWAjT23FWs1TgNU5wyFH5SB3hrP
bitcoin address: (your uncompressed address)
The script produces two public addresses from the same private key. The "compressed" address format produces smaller transaction sizes (which means lower transaction fees), but it's newer and not as well-supported as the original "uncompressed" format. Choose which format you like, and write down the "WIF" and "bitcoin address" on a piece of paper. The "WIF" is just the private key, converted to a slightly shorter format that Bitcoin wallet apps prefer. Double-check your paper, and reboot your computer. Aside from the copy on the piece of paper, the reboot should destroy all traces of the private key. Since the paper now holds the only copy of the private key, do not lose it, or you will lose the ability to spend any funds sent to the address!
Conclusion
With this method you are creating an airgapped environment that will never touch the internet. Also, we are checking that the code we use is not tampered with. If this is followed strictly, I see virtually no chance of your keys being hacked.
How to spend your funds from a securely generated paper wallet
Almost all tutorials seen online will have you import or sweep your private keys into a desktop or mobile wallet, which are hot wallets. In the meantime, you are exposed, and all of your work to secure the cold storage is thrown away. This method will let you sign the transaction offline (you will not expose your private key on an online system). You'll need:
your phone (android) or another computer
your computer
The source of this method is CryptoGuide on YouTube: https://www.youtube.com/watch?v=-9kf9LMnJpI&t=86s . Basically you can follow his video as it is foolproof. Please check that the Electrum download is signed. The summarized steps are:
Download Electrum on both devices and check it is signed, for safety.
Disconnect your phone from the internet (flight mode = all connections off) and input your private key in Electrum.
Generate the transaction on your desktop and export it via QR (never leave unspent BTC or you will lose them).
On your phone, open Electrum > Send > QR (this will import the transaction) and scan the transaction exported from the desktop.
Sign the transaction on your phone.
Export the signed transaction as a QR code.
Load the signed transaction into the desktop Electrum and broadcast it to the network.
Wait for 3 confirmations before connecting your phone to the internet again.
Ideas for improvement:
It would be a good idea to install (I don't know how) a QR generator in Ubuntu, to scan the WIF with our airgapped phone and make it easier to import the key (see the sketch after this list).
Multisig can be a really good option to implement.
I'm not sure if Electrum is 100% safe. I've used it and it has worked for me, but I read several people recommending using Bitcoin's platform directly.
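On the QR generator idea: one option worth trying is qrencode, a small command-line tool that can print a QR code straight into the terminal. Since the live environment is offline, you would have to download the qrencode .deb (and its libqrencode dependency) in advance and copy it to the USB stick; the exact steps below are an untested sketch, not part of the original method:
sudo dpkg -i libqrencode*.deb qrencode*.deb
qrencode -t ANSIUTF8 "PASTE_YOUR_WIF_HERE"
You could then scan the resulting QR code with the airgapped phone instead of typing the WIF by hand.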
So that's it. I hope someone can find this helpful or help in creating a better method. If you like, you can donate at 1Che7FG93vDsbes6NPBhYuz29wQoW7qFUH
In this guide I will cover how to set up a functional server providing: a mailserver, a webserver, a file sharing server, a backup server and monitoring. For this project a dynamic domain name is also needed. If you don't want to spend money registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record (TXT records are only available after 30 days, but that's worth not spending ~15€/year for a domain name), which the mailserver specifically needs. Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.
Hardware
Raspberry Pi 4 2 GB version (4/8 GB version highly recommended, 1 GB version is a no-no)
SanDisk 16 GB micro SD
2 Geekworm X835 boards (SATA + USB 3.0 hub) w/ 12V 5A power supply
First things first, we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in beta, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later). Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi. Once powered on, we need to attach it to the LAN via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.
raspi-config
Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.
64-bit kernel
As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config. $ sudo rpi-update $ sudo nano /boot/config.txt
arm_64bit=1
$ sudo reboot
swap size
With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damage caused by the OOM killer. $ sudo dphys-swapfile swapoff $ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024
$ sudo dphys-swapfile setup $ sudo dphys-swapfile swapon Here we are increasing the swap size to 1 GB. Depending on your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter the swap is emptied, everything it held moves back to RAM, and the OOM killer may eventually be called in.
APT
In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages. $ sudo nano /etc/apt/apt.conf.d/01norecommend
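The content of this file isn't shown in the post; the usual two directives for this are:
APT::Install-Recommends "0";
APT::Install-Suggests "0";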
Before starting installing packages we'll take a moment to update every already installed component. $ sudo apt update $ sudo apt full-upgrade $ sudo apt autoremove $ sudo apt autoclean $ sudo reboot
Static IP address
For simplicity's sake we'll give a static IP address to our server (within our LAN of course). You can set it on your router's configuration page or set it directly on the Raspberry Pi. $ sudo nano /etc/dhcpcd.conf
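The dhcpcd.conf snippet itself isn't included in the post; a minimal sketch (the addresses below are placeholders, adapt them to your LAN) would be:
interface eth0
static ip_address=192.168.1.10/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1 1.1.1.1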
The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers. First we'll set the hostname to our mail domain name. Since my domain is naspi.webredirect.org, the mail domain name will be mail.naspi.webredirect.org. $ sudo hostnamectl set-hostname mail.naspi.webredirect.org $ sudo nano /etc/hosts
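The hosts entry isn't shown either; iRedMail expects the fully qualified hostname to resolve locally, so a minimal line (using the hostname set above) would look like:
127.0.0.1 mail.naspi.webredirect.org mail localhost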
Now we can download and set up iRedMail. $ sudo apt install git $ cd /home/pi/Documents $ sudo git clone https://github.com/iredmail/iRedMail.git $ cd /home/pi/Documents/iRedMail $ sudo chmod +x iRedMail.sh $ sudo bash iRedMail.sh The script will guide you through the installation process. When asked for the mail directory location, set /var/vmail. When asked for the webserver, set Nginx. When asked for the DB engine, set MariaDB, and set a secure and strong password when prompted. When asked for the domain name, set yours, but without the mail. subdomain. Again, set a secure and strong password. In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it later. When asked to, confirm your choices and let the installer do the rest. $ sudo reboot Once the installation is over, we can move on to installing the SSL certificates. $ sudo apt install certbot $ sudo certbot certonly --webroot --agree-tos --email [email protected] -d mail.naspi.webredirect.org -w /var/www/html/ $ sudo nano /etc/nginx/templates/ssl.tmpl
$ sudo service postfix restart $ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart Now we have to tweak some Nginx settings in order not to interfere with other services. $ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart
.local domain
If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings. $ sudo apt install avahi-daemon $ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
RAID 1
At this point we can start setting up the disks. I highly recommend using two or more disks in a RAID array, to prevent data loss in case of a disk failure. We will use mdadm, and assume that our disks are named /dev/sda1 and /dev/sdb1. To find out the names, issue the sudo fdisk -l command. $ sudo apt install mdadm $ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1 $ sudo mdadm --detail /dev/md/RED $ sudo -i $ mdadm --detail --scan >> /etc/mdadm/mdadm.conf $ exit $ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED $ sudo mount /dev/md/RED /NAS/RED The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
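The initial RAID 1 resync can take a while; you can watch its progress, and check the health of the array afterwards, with:
$ cat /proc/mdstat
$ sudo mdadm --detail /dev/md/RED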
fstab
To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find these out by issuing the command ls -al /dev/disk/by-uuid. $ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.
S.M.A.R.T.
To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool. $ sudo apt install smartmontools $ sudo nano /etc/default/smartmontools
start_smartd=yes
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m [email protected]
$ sudo service smartd restart For every disk you want to monitor add a line like the one above. About the flags: · -a: full scan. · -I 190, -I 194: ignore attributes 190 and 194, since those are temperature values and would trigger the alarm at every temperature variation. · -d sat, -d removable: removable SATA disk. · -o on: offline testing, if available. · -S on: attribute saving between power cycles. · -n standby,48: check the drive every 30 minutes (default behavior) only if it is spinning, or after 48 skipped checks (24 hours). · -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM. · -m [email protected]: email address to which alerts are sent in case of problems.
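To check that smartd can actually read a disk, or to kick off a test on demand, you can use smartctl from the same package (/dev/sda below is just an example device):
$ sudo smartctl -a /dev/sda
$ sudo smartctl -t short /dev/sda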
Automount USB devices
Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services. $ sudo apt install pmount $ sudo nano /etc/udev/rules.d/11-automount.rules
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    # Known disk: mount it through fstab and open up permissions
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        # Unknown disk without a label: mount it under its partition name
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        # Unknown disk with a label: mount it under its label
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /usr/local/bin/automount The udev rule triggers when the kernel announces that a USB device has been plugged in, calling a service which is kept alive as long as the USB remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab; otherwise the disk is mounted to a default location, using its label (if available; the partition name is used otherwise).
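The udev rule and the systemd unit it calls aren't shown in the post, only the mount script is; a rough sketch of how such a pair could look (file names, matching rules and unit name are my assumptions, not the author's exact files):
/etc/udev/rules.d/11-automount.rules:
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service"
/etc/systemd/system/automount@.service:
[Unit]
Description=Automount USB partition %i
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %i
ExecStop=/usr/bin/pumount /dev/%i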
Netdata
Let's now install netdata. For this another handy script will help us. $ bash <(curl -Ss https://my-netdata.io/kickstart.sh) Once the installation process completes, we can open our dashboard to the internet, using Nginx as a reverse proxy and certbot for the SSL certificate. $ sudo apt install python-certbot-nginx $ sudo nano /etc/nginx/sites-available/20-netdata
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
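The Nginx site file for netdata (the 20-netdata file opened above) isn't shown in the post; a minimal reverse-proxy sketch pointing at the unix socket configured here (the server_name is a placeholder) could be:
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_pass http://netdata;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
As with the other sites in this guide, it would then need to be symlinked into sites-enabled before reloading Nginx.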
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS. $ sudo certbot --nginx Now configure the alarm notifications. I suggest you take a read through the stock file, instead of modifying it immediately, to enable every service you would like. You'll spend some time on it, yes, but eventually you will be very satisfied. $ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration
# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData [email protected]"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="[email protected]"
# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"
###############################################################################
# RECIPIENTS PER ROLE
# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"
# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"
# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"
# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart
Samba
Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN. $ sudo apt install samba samba-common-bin $ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes
# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file [email protected]
panic action = /usr/share/samba/panic-action %d
# Server role
server role = standalone server
obey pam restrictions = yes
# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user
#======================= Share Definitions =======================
[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart Now let's add a user for the share: $ sudo useradd NASbackup -m -G users,NAS $ sudo passwd NASbackup $ sudo smbpasswd -a NASbackup And at last let's open the needed ports in the firewall: $ sudo nano /etc/nftables.conf
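The nftables lines for Samba aren't shown; following the same pattern used for avahi and minarca elsewhere in this guide, the standard Samba ports would be opened with something like:
# samba
udp dport { 137, 138 } accept
tcp dport { 139, 445 } accept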
Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but open source. $ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt First of all, we need to create a database for NextCloud. $ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextcloud@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextcloud@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation. $ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip $ sudo unzip latest.zip $ sudo mv nextcloud /var/www/nextcloud/ $ sudo chown -R www-data:www-data /var/www/nextcloud $ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \; $ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \; $ sudo nano /etc/nginx/sites-available/10-nextcloud
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud Now enable SSL and redirect everything to HTTPS: $ sudo certbot --nginx $ sudo service nginx restart Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than the location of the files you will save on the NextCloud. Because it might grow large, I suggest you specify a folder on an external disk.
Minarca
Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process. $ cd /home/pi/Documents $ sudo git clone https://gitlab.com/ikus-soft/minarca.git $ cd /home/pi/Documents/minarca $ sudo make build-server $ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb $ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.
# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log
# Server interface
ServerHost=0.0.0.0
ServerPort=8080
# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4 (docs: https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md)
DefaultTheme=default
# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db
# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/ $ sudo chown minarca:minarca /NAS/Backup/Minarca/ $ sudo chmod 0750 /NAS/Backup/Minarca/ $ sudo service minarca-server restart As always we need to open the required ports in our firewall settings: $ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart And now we can open it to the internet: $ sudo nano /etc/nginx/sites-available/30-minarca
To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
If you want your site to be accessible from the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open others, for instance port 8080 if you want to use Minarca even outside your LAN.
If you want to open your SSH port, I suggest you move it to something different from port 22 (the default), to mitigate attacks from the outside.
HTTP/HTTPS ports
80 (HTTP) 443 (HTTPS)
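Mail ports
The post only lists the web ports; for the mailserver part you would typically also forward the standard mail ports, for example 25 (SMTP), 587 (submission) and 993 (IMAPS), depending on which services you actually want reachable from outside.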
The end?
And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a Samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you'll ever want to have a personal website. But now you can do whatever you want: add things, tweak settings and so on. Your imagination is your only limit (almost). EDIT: typos ;)
So recently I acquired some gear from eBay; it turns out one of the items was a Netgear FS728TP that was bricked. I needed some time alone to decompress, so I decided to see if I could fix it. I do not take credit for the original work of discovering the datasheet or UART pins; this was found in a Google cache of an old Netgear forum. The images and guide are all original content from myself. Anyway, here goes the guide. https://preview.redd.it/idjfna12l8o51.jpg?width=1256&format=pjpg&auto=webp&s=52afda1fe60d6eb4dea98f6b2cb6b555584e4605
FS728TP UART Recovery Unbricking
If you managed to brick your FS728TP with a bad firmware update, rollback, etc., this guide aggregates data found around the net. This process involves soldering, serial communications and some basic hardware knowledge. This device uses a Marvell 88E6218-LG01 with UART on p52 = Rx, p53 = Tx. U27 is similar to a MAX232 chip, where p11 and p12 connect to the UART on the Marvell controller.
You will need the 5.0.0.7 Package for the boot rom and the 5.0.0.8 for the latest firmware
Hyperterminal, puttyplus or something that can send files via XMODEM
Soldering Iron
FTDI breakout board or cable
Soldering UART
Begin by unplugging everything and opening the case of the FS728TPv1. Once open, find U27. It will be near the back of the board, J8, the MARVELL controller, and may be under the MAC sticker. Find pins 11 and 12 as shown in the photo. Solder a wire to each of these pins and connect them to the RX and TX pins of your FTDI cable or board. Be sure to also connect GND to a suitable location, such as a screw on the board. https://preview.redd.it/n2uddgcqj8o51.jpg?width=800&format=pjpg&auto=webp&s=c196c8bae1795dfd4fab8ab3b8a08294e30dfd8a
Booting
WARNING: LETHAL VOLTAGE Cover the power supply with a piece of plexiglass, FR4 or other non-conductive material to protect yourself from the mains power. Use electrical tape to hold it in place. This will also act as an air duct to keep the PSU cool while the case is off. With the FTDI chip connected to your PC, open a serial session using:
baudrate = 38400, data bits = 8, parity = none, stop bits = 1, flow control = none
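If you are on Linux rather than Hyperterminal, one quick way to open a session with those settings (assuming the FTDI adapter shows up as /dev/ttyUSB0) is:
screen /dev/ttyUSB0 38400
screen defaults to 8N1, which matches the settings above; for the XMODEM transfer itself you would still need a terminal that supports it, such as minicom with the lrzsz tools installed.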
Now, boot the switch. If nothing happens, try swapping your RX/TX wires. If successful, you will be presented with a screen that says "Autoboot in 2 seconds - press RETURN or Esc. to abort and enter prom". Press RETURN or Esc, and the following menu will show in the terminal:
For this step, I used Hyperterminal; any other terminal with XMODEM file transfer capabilities should work. If you need to flash the BOOT CODE, flash firmware 3.0.0.22 first. This will take about 25-30 minutes. This will re-enable the web interface and allow you to flash BOOT CODE 1.0.0.5 and FIRMWARE 5.0.0.8 from the web interface or Smartwizard Discovery. If you already have the 1.0.0.5 BOOT CODE, flash the 5.0.0.8 FIRMWARE instead. This will take about 25-30 minutes. https://preview.redd.it/2q7on2ezj8o51.png?width=855&format=png&auto=webp&s=3461deafa0afb0e50844b3cfa66ab6d421984546
Final Steps
When the firmware has been flashed successfully, reboot the device. You should see the system tests PASS and "Decompressing SW from image-1". Congratulations, you have unbricked your switch. Now sell it and get something better than a 10-year-old Netgear switch. FYI, this is cross-posted to my Gist here: https://gist.github.com/BinaryConstruct/a6e823ba810c77f0ce7b262176b0bc03 edit: added photo of setup
./play.it is free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.). A more complete description of ./play.it was already posted on r/linux_gaming a couple of months ago: ./play.it, an easy way to install commercial games on GNU/Linux. It's already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12 and focus on the different points of ./play.it that kept us busy during all this time, and of which coding was only a small part.
What’s new with 2.12?
Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;) Compared to the usual updates, 2.12 is a major one, especially since for two years we had slowed down the addition of new features. Some patches had been gathering dust since the end of 2018 before finally being integrated in this update! The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:
New options:
--output-dir: Set the output directory for generated packages
--overwrite: Replace packages if they already exist
--icons: Allow including icons only if dependencies are present
Wrapper changes:
Drop $XDG_RUNTIME_DIR from the candidates for temporary directories
Prevent scan of unneeded directories
Drop script identification by MD5 hash
Archive-related changes:
Only extract needed files when using unzip
Allow to use renamed installers
Add support for LHA archives extraction
Engines-related changes:
New engine: ResidualVM
New engine: System-provided Mono runtime
DOSBox: Use $PLAYIT_DOSBOX_BINARY in launchers if defined
Packages-related changes:
Add ability to set variables for package-specific postinst and prerm scripts
Arch Linux: Improve consistency of 32-bit package naming
New helper functions:
version_target_is_older_than: Check if the game script target version is older than a given one
toupper: Convert file names to upper case
New generic dependency keywords:
libgdk_pixbuf-2.0.so.0
libglib-2.0.so.0 / libgobject-2.0.so.0
libmbedtls.so.12
libpng16.so.16
libopenal.so.1 (alias for openal)
libSDL2-2.0.so.0 (alias for sdl2)
libturbojpeg.so.0
libuv.so.1
libvorbisfile.so.3 (alias for vorbis)
libz.so.1
Codebase clean-up and improvements:
Massive rework of all message-related functions
Drop hardcoded paths for icons and .desktop launchers
Use system-specific default installation prefix for generated packages
Forcefully set errexit setting on library initialization
Use dirname/basename instead of built-in shell patterns
Development migration
History
As with many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't stay around long, and so the first public Git repository of the project was born. The easing of collaborative work was only accidentally achieved by this quest for eternity, and wasn't the original motivation for making the repository publicly available. Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:
GitHub, which we all know of; choosing it was more a short-term fallback than a long-term decision;
some Gogs instance, which was hosted by debian-fr.xyz, a community the main ./play.it author is close to;
Framagit, a famous instance of the infamous GitLab forge, hosted by Framasoft.
Dedicated forge
As development progressed, ./play.it began to need more resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies behind a forge platform was the last deciding factor towards hosting a dedicated forge. So it happened: we deployed a forge platform on a dedicated server, hugely benefiting from the tremendous work achieved by the team maintaining the GitLab Debian package. In return, we tried to contribute our findings back to improve the packaging of this software. It was not planned, but this migration happened just a little while before the announcement “Déframasoftisons Internet !” (French article) about the planned end of Framagit. This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and since then has been moved to another VPS, rented from Hetzner. The specifications are similar, as is the service, but thanks to this migration our hosting costs have been cut in half. Keep in mind that this is paid for by a single person, so any little donation helps a lot on this front. ;) To the surprise of our system administrator, this last migration took only a couple of hours, with no service interruption reported by our users.
Forge access
This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you not to abuse this; the main restriction is that we do not wish to host projects unrelated to ./play.it. Of course, exceptions are made for our active contributors, who are allowed to host some personal projects there. So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.
API
With the collection of supported games growing endlessly, we have started the development of a public API giving access to lots of information related to ./play.it. This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the archives they handle, and the games installable through the project. Relations are, of course, handled between those items, enabling its use for requests like: "What packages are required on my system to install Cæsar Ⅲ?" or "What are the free (as in beer) games handled via DOSBox?". Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling application (downloading, installation, starting, etc.) using ./play.it as one of its building bricks. For those curious about the technical side, it's an API based on Lumen making requests to a MariaDB database, all self-hosted on a Debian Sid machine. Not only is the code of the API versioned on our forge, but also the structure and content of the databases, which will allow those who want a local copy to install one easily.
New website
Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki. Indeed, while the lack of a database and the plain text file structure of DokuWiki seemed attractive at first, when ./play.it supported only a handful of games (link in French), this became more inconvenient as the library of games supported by ./play.it grew. We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available. If you feel like providing a helping hand on this task, some priority tasks have been identified to allow opening a new website able to replace the current one. And for those interested in technical details, this website was developed in PHP using the Laravel framework. The current in-development version is hosted for now on the same Debian Sid machine as the API.
GUI
A regular comment about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in the terminal remains somewhat intimidating. Our answer until now has been that while the project itself doesn't aim to provide a graphical interface (KISS principle, "Keep it simple, stupid", still and always), it would be relatively easy to develop a graphical front-end for it later on. Well, it happens that this is now reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout-out. :-) In practice, it is a small amount of Python 3 code (a GUI completely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, but the user shouldn't have to input anything in it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy, since a script of less than 500 lines of code (written quickly over a week-end) was enough to do the job! Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent any other similar projects from being born, for example using a different language or graphical toolkit (we, globally, don't have any particular affinity towards Python or GTK). Using this interface involves three steps: first, a list of available games is displayed, coming directly from our API. You just need to select from the list (optionally using the search bar) the game you want to install. It then switches to a second screen, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory, and the address bar on the top lets you select which one to use (clicking the open button on the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move on to the third step, which is just watching ./play.it do its job :-) Once done, a simple click on the button at the bottom will run the game (although, from this step on, the game is fully integrated into your system as usual, so you no longer need this tool to run it). To download potentially missing files, the interface will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handles torrents), whose output will be displayed in the terminal during the third phase, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input, if the corresponding environment variable is set, which is more user-friendly), otherwise su will be used. Of course, any suggestion for improvement will be received with pleasure.
New games
Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
7 Billion Humans
Agatha Christie: The ABC Murders
Age of Mythology Demo
Among the Sleep
Anomaly: Warzone Earth
Another Lost Phone: Lauraʼs Story
Assault Android Cactus
Baba Is You
Blade Runner
Bleed
Bleed 2
Blocks that matter (previously supported by ./play.it 1.x)
Butcher Demo
Capsized
Cayne
Cineris Somnia
Commandos 3: Destination Berlin
Diablo
Din’s Curse
Divine Divinity (previously supported by ./play.it 1.x)
Duet (previously supported by ./play.it 1.x)
Earthworm Jim
Edna & Harvey: The Breakout — Anniversary Edition
Element4l
Factorio — Demo
Finding Paradise
Firewatch
FlatOut 2
Forced
Forgotton Anne
Freelancer Demo
Frostpunk
Full Throttle Remastered
Giana Sisters: Twisted Dreams
Gibbous — A Cthulhu Adventure
Gorogoa
Indiana Jones and the Last Crusade
Into the Breach
Kerbal Space Program
LEGO Batman: The Videogame
Lego Harry Potter Years 1-4
Maniac Mansion
Metal Slug 3 (previously supported by ./play.it 1.x)
MIND: Path to thalamus
Minecraftn 4K
Minit
Monkey Island 4: Escape from Monkey Island
Multiwinia (previously supported by ./play.it 1.x)
Mushroom 11
Myst: Masterpiece Edition (previously supported by ./play.it 1.x)
Neverwinter Nights: Enhanced Edition
Overgrowth
Perimeter
Populous: Promised Lands (previously supported by ./play.it 1.x)
Populous 2 (previously supported by ./play.it 1.x)
Prison Architect
Q.U.B.E. 2
Quern — Undying Thoughts
Rayman Origins
Retro City Rampage (previously supported by ./play.it 1.x)
RiME
Satellite Reign (previously supported by ./play.it 1.x)
Star Wars: Knights of the Old Republic (previously supported by ./play.it 1.x)
Starship Titanic
SteamWorld Quest: Hand of Gilgamech
Stellaris
Ancient Relics Story Pack
Apocalypse
Arachnoid Portrait Pack
Distant Stars Story Pack
Federations
Horizon Signal
Humanoids Species Pack
Leviathans Story Pack
Lithoids Species Pack
Megacorp
Plantoids Species Pack
Synthetic Dawn Story Pack
Utopia
Strike Suit Zero
Sundered
Sunless Skies
Cyclopean Owl DLC
Symphony
Tangledeep
Tengami
Tetrobot and Co.
The Adventures of Shuggy
The Aquatic Adventure of the Last Human
The Count Lucanor
The First Tree
The Longing
The Pillars of the Earth
The Witcher (previously supported by ./play.it 1.x)
The Witcher 3: Wild Hunt
Tonight We Riot
Toren
Touhou Chireiden ~ Subterranean Animism — Demo
Touhou Hifuu Nightmare Diary ~ Violet Detector
Triple Triad Gold
Vambrace: Cold Soul
VVVVVV (previously supported by ./play.it 1.x)
War for the Overworld (the base game was already supported, new expansions have been added):
Heart of Gold
Seasonal Worker Skins
The Under Games
Warcraft: Orcs & Humans
Warhammer 40,000: Dawn of War — Winter Assault Demo
Warhammer 40,000: Gladius — Relics of War
Warlords Battlecry II (previously supported by ./play.it 1.x)
Wing Commander (previously supported by ./play.it 1.x)
Wing Commander II (previously supported by ./play.it 1.x)
Yooka Laylee
Zak McKracken and the Alien Mindbenders
If your favourite game is not yet supported by ./play.it, you should ask for it in the dedicated tracker on our forge. The only requirement for a request to be valid is that there exists a version of the game that is not burdened by DRM.
What’s next?
Our team being inexhaustible, work on the future 2.13 version has already begun… A few major objectives of this next version are:
the complete and definitive relegation of ./play.it 1.14 to the archive bin; it is still required for about twenty games;
I really enjoyed m4nz's recent post, Getting into DevOps as a beginner is tricky - My 50 cents to help with it, and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.
Background
While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TV's and Roku were really a thing simply because I thought it was cool and interesting and enjoyed the novelty of it. Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin. Without a CS degree, any real experience, and 1 measly certification, I figured I had to level up my skills as quickly as possible and this is where I really started to get into DevOps tools and methodologies. I now have 5 years experience in the IT world, most of it doing DevOps/SRE work.
Certifications
People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience. Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.
LPI Linux Essentials - a basic multiple-choice test on Linux basics. Fairly easy, especially if you have *nix experience; otherwise I'd recommend taking a course like I did. linuxacademy worked for me, but there are other sites out there that can help. For this one, you can probably get by just searching YouTube for the topics covered on the test.
Linux Foundation Certified System Administrator - This one is a hands-on test, which is great: you do a screen share with a proctor and ssh into their server; then you have a list of objectives to accomplish on the server pretty much however you see fit. Write a big bash script to do it all, do like 100 mv commands manually, write a small program in python lol, whatever you want so long as you accomplish the goals in time.
Amazon Web Services certs - I would go for the all 3 associate level certs if you can: Solutions Architect, SysOps Administrator, Developer. These are quite tedious to study for as they can be more a certification that you know which AWS products to get your client to use than they are a test of your cloud knowledge at times. For better or worse, AWS is the top cloud provider at the moment so showing you have knowledge there opens you up to the most jobs. If you know you want to work with another cloud provider then the Google certs can be swapped out here, for example. I know that with the AWS certs, I get offers all the time for companies that use GCP even though I have no real experience there. Folks with the google certs: is the reverse true for you? (genuinely asking, it would be useful for beginners to know).
Certified Kubernetes Administrator - I don't actually have this cert since at this point in my career I have real Kubernetes experience on my resume so it's kind of not needed, but if you wanted learn Kubernetes and prove it to prospective employers it can help.
Tools and Experimentation
While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them. Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
Linux! - Unless you want to only work with Windows for some reason, Linux is the most important thing you can learn to become a good DevOps professional in my view. Install it on your personal laptop, try a bunch of different distributions, develop an opinion on systemd vs. other init systems ;), get a few cloud servers on DigitalOcean or AWS to mess around with, set up a home server, try different desktop environments and window managers, master a cli text editor, break your install and try to fix it, customize your desktop until it's unrecognizable lol. Just get as much experience with Linux as possible!
git - Aside from general Linux knowledge, git is one of the most important tools for DevOps/SREs to know in my view. A good DevOps team will usually practice "git ops," i.e., making changes to your CI/CD pipeline, infrastructure, or server provisioning will involve making a pull request against the appropriate git repo.
terraform - terraform is the de facto "infrastructure as code" tool in the DevOps world. Personally, I love it despite its pain points. It's a great place to start once you have a good Linux and cloud knowledge foundation, as it will allow you to easily and quickly bring up infrastructure to practice with the other tools on this list.
packer - While not hugely popular or widely used, it's such a simple and useful tool that I recommend you check it out. Packer lets you build "immutable server images" with all of the tools and configuration you need baked in, so that your servers come online ready to start working immediately without any further provisioning needed. Combined with terraform, you can bring up Kubernetes clusters with a single command, or any other fancy DevOps tools you want to play with.
ansible - With the advent of Kubernetes and container orchestration, "configuration management" has become somewhat less relevant ... or at least less of a flashy and popular topic. It is still something you should be familiar with and it absolutely is in wide use at many companies. Personally, I love the combination of ansible + packer + terraform and find it very useful. Chef and Puppet are nice too, but Ansible is the most popular last I checked so unless you have a preference (or already know Ruby) then I'd go with that.
jenkins - despite its many, many flaws and pain points lol, Jenkins is still incredibly useful and widely used as a CI/CD solution, and it's fairly easy to get started with. EDIT: Upon further consideration, Jenkins may not be the best choice for beginners to learn. At this point, you're probably better off with something like GitLab: it's a more powerful and useful tool, you'll learn YAML for its config, and it's less of a pain to use. If you know Jenkins that's great and it will probably help you get a job, but then you might implement Jenkins since it's what you know ... so if you have the chance, choose another tool.
postgres - Knowledge of SQL databases is very useful, both from a DBA standpoint and the operations side of things. You might be helping developers develop a new service and helping with setting up schema (or doing so yourself for an internal tool), or you might be spinning up an instance for devs to access, or even pinpointing that a SQL query is the bottleneck in an app's performance. I put Postgres here because that's what I personally use and have seen a lot in the industry, but experience with any SQL database will be useful.
nginx - nginx is commonly used as an HTTP server for simple services or as an ingress option for Kubernetes. Learn the basic config options, how to do TLS, etc.
docker - Ah, the buzzword of yesteryear. Docker and containerization are still incredibly dominant as a paradigm in the DevOps world right now, and it is paramount that you learn and master them. Be comfortable writing Dockerfiles, troubleshooting Docker networking, and understanding the fundamentals of how Linux containers work ... and definitely get familiar with Alpine Linux, as it will most likely be the base image for most of your company's Docker images (a tiny example Dockerfile follows at the end of this list).
kubernetes - At many companies, DevOps Engineer/Site Reliability Engineer effectively translates to "Kubernetes babysitter," especially if you're new on the job. Container orchestration, while no longer truly "cutting edge," is still fairly new, and there is high demand for people with knowledge of and experience with it. Work through Kubernetes The Hard Way to bring up a cluster manually. Learn and know the various "primitives" like pods and replicasets. Learn about ingress and how to expose services.
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.
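To make the docker point above concrete, here is a tiny, hypothetical Alpine-based Dockerfile of the kind you will end up writing and reviewing constantly (the script name is made up for the example):
FROM alpine:3.12
# install only what the script needs; --no-cache keeps the image small
RUN apk add --no-cache bash curl
# copy in a hypothetical health-check script and make it the entrypoint
COPY healthcheck.sh /usr/local/bin/healthcheck.sh
RUN chmod +x /usr/local/bin/healthcheck.sh
ENTRYPOINT ["/usr/local/bin/healthcheck.sh"]
Build and run it with docker build -t healthcheck . followed by docker run --rm healthcheck, then practice shrinking it, turning it into a multi-stage build, and debugging it when it breaks.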
Programming Languages
Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.
Bash - It's right there in your terminal and for better or worse, a scarily large amount of the world's IT infrastructure depends on ill-conceived and poorly commented bash scripts. It's bash scripts all the way down. I joke, but bash is an incredibly powerful tool and a great place to start learning programming basics like control flow and variables.
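For instance, a tiny sketch of those basics (variables, a loop, an if statement); the service names are made up for illustration:

    #!/usr/bin/env bash
    set -euo pipefail

    services=("nginx" "postgresql" "sshd")

    for svc in "${services[@]}"; do
        if systemctl is-active --quiet "$svc"; then
            echo "$svc is running"
        else
            echo "$svc is NOT running" >&2
        fi
    done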
Python - It has a beautiful syntax, it's easy to learn, and the python shell makes it quick to learn the basics. Many companies have large repos of python scripts used by operations for automating all sorts of things. Also, many older DevOps tools (like ansible) are written in python.
Go - Go makes for a great first "systems language" in that it's quite powerful and gives you access to some low level functionality, but the syntax is simple, explicit and easy to understand. It's also fast, compiles to static binaries, has a strong type system and it's easier to learn than C or C++ or Rust. Also, most modern DevOps tools are written in Go. If the documentation isn't answering your question and the logs aren't clear enough, nothing beats being able to go to the source code of a tool for troubleshooting.
Expanding your knowledge
As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important, nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, networking (it's always DNS ;)), the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving?

The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.

The biggest innovations in the IT world are generally ones of abstraction: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what is happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are, and it lets you place new tools and information into that big picture, so you'll know why they'd be useful (or whether they'd even work for your company and team) before you've looked at them in depth.

Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
Binary Today Trader is the latest binary options trading software, which claims to be designed for both novice and experienced traders to maximize their profits trading binary options without investing too much time and effort. Created by John Kane, Binary Today Trader promises its users a success rate of 70%, which is pretty impressive. Read ahead ...

Binary Today, or anyone involved with Binary Today, will not accept any liability for loss or damage as a result of reliance on the information (including reviews, recommendations, charts, software, income reports and signals) contained within this website. Please be fully informed regarding the risks and costs associated with trading the financial markets; it is one of the riskiest investment ...

A put option is a binary options trading decision which traders make under an educated guess that the asset price will fall below the strike price in the predetermined period of time. One of the biggest advantages that binary options owe their global popularity to is the ability for traders to join and start trading regardless of the level of their trading knowledge.

We write reviews of the latest binary option and forex software and brokers so that people can close their best deals. Newsletter: get the best trading updates straight into your inbox!

Trade stocks, ETFs, forex & Digital Options at IQ Option, one of the fastest growing online trading platforms. Sign up today and be a part of the 17 million user base at IQ Option. Download our award-winning free online binary options trading software! Practice with a free demo account! Voted #1 in 28 countries with 24/7 support!

Pocket Option is a binary options brokerage that provides online trading of more than 100 different underlying assets. Pocket Option is one of the only sites that accept new traders from the United States and Europe. Established in 2017, Pocket Option is based in the Marshall Islands and is licensed by the IFMRRC (International Financial Market Relations Regulation Center).

If you have limited knowledge of software, trading or operating systems, then the jargon and options available when choosing the best binary trading platform may well be confusing. That's where we come in. We've checked all the top brokers and shortlisted them for you, saving you the time of evaluating them for yourself.

No, the best binary option robot software is free to download and use. For the most part, you will need to download it using a free account before opening a real account with a broker. What if my trading robot gets it wrong? Even with sophisticated investment tools, there is no guarantee that you will be 100% successful; the robot improves the chances of making successful trades. How do I find ...

This software works with MetaTrader 4, so in order to use it you have to download and sign up for an MT4 demo (100% free!). Step 1: Download MetaTrader 4. Step 2: Trade Assistant – Trend Detector – Booster. These Trade Assistants will work with every top-rated software on Binary Today.