Today we’re going to discuss how 3D graphics are created and then rendered for a 2D screen. From polygon count and meshes, to lighting and texturing, there are a lot of considerations in building the 3D objects we see in our movies and video games, but then displaying these 3D objects on a 2D surface adds a number of additional challenges. So we’ll talk about some of the reasons you see occasional glitches in your video games, as well as the reason a dedicated graphics processing unit, or GPU, was needed to meet the increasing demand for more and more complex graphics.
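To make the core idea of rendering 3D for a 2D screen concrete, here is a minimal sketch of perspective projection, assuming a simple pinhole camera at the origin (the focal length and the triangle's coordinates are made up for illustration):

```python
# Minimal sketch: perspective projection of a 3D point onto a 2D screen.
# Assumes a pinhole camera at the origin looking down the +z axis; the
# focal length f is an illustrative parameter, not from the episode.

def project(x, y, z, f=1.0):
    """Map a 3D point to 2D screen coordinates by dividing by depth."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z, f * y / z)

# A triangle's three vertices in 3D space...
triangle = [(1, 1, 2), (3, 1, 4), (2, 2, 3)]
# ...become three 2D screen points; farther vertices shrink toward center.
print([project(*v) for v in triangle])
```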
So now that we’ve built and programmed our very own CPU, we’re going to take a step back and look at how CPU speeds have rapidly increased from just a few cycles per second to gigahertz! Some of that improvement, of course, has come from faster and more efficient transistors, but a number of hardware designs have been implemented to boost performance. And you’ve probably heard or read about a lot of these - they’re the buzzwords attached to just about every new CPU release - terms like instruction pipelining, cache, FLOPS, superscalar, branch prediction, multi-core processors, and even supercomputers! These designs are pretty complicated, but the fundamental concepts behind them are not. So bear with us as we introduce a lot of new terminology, including what might just be the best computer science term of all time: the dirty bit. Let us explain.
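Since the dirty bit gets top billing, here is a hedged toy sketch of the idea behind it - a write-back cache that only copies a value back to RAM if it was actually modified. The structure and names here are illustrative, not a real CPU's design:

```python
# Toy write-back cache using a "dirty bit": a cached value is only
# written back to slow main memory when it has actually been modified.

RAM = {0x10: 99}                      # pretend main memory
cache = {}                            # addr -> {"value": ..., "dirty": bool}

def cache_write(addr, value):
    cache[addr] = {"value": value, "dirty": True}   # modified: set dirty bit

def cache_evict(addr):
    line = cache.pop(addr)
    if line["dirty"]:                 # only dirty lines cost a RAM write
        RAM[addr] = line["value"]

cache_write(0x10, 42)
cache_evict(0x10)
print(RAM[0x10])                      # 42 - written back because it was dirty
```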
Today we’re going to take a step back from programming and discuss the person who formulated many of the theoretical concepts that underlie modern computation - the father of computer science himself: Alan Turing. Now normally we try to avoid “Great Man” history in Crash Course because truthfully, all milestones in human history are much more complex than a single individual or a single lens can capture - but for Turing we are going to make an exception. From his theoretical Turing Machine and work on the Bombe to break Nazi Enigma codes during World War II, to his contributions in the field of Artificial Intelligence (before it was even called that), Alan Turing helped inspire the first generation of computer scientists - despite a life tragically cut short.
Today, Carrie Anne is going to take a look at how those transistors we talked about last episode can be used to perform complex actions. With just two states, on and off, the flow of electricity can be used to perform a number of logical operations, which are guided by a branch of mathematics called Boolean Algebra. We’re going to focus on three fundamental operations - NOT, AND, and OR - and show how they can be built into a series of really useful circuits. And it’s these simple electrical circuits that lay the groundwork for our much more complex machines.
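As a quick sketch of the three operations - and of how composing them yields more useful circuits - here they are as plain functions over the two states, with XOR built purely from the three primitives:

```python
# The three fundamental Boolean operations, as functions over the two
# states "on" (True) and "off" (False).

def NOT(a):    return not a
def AND(a, b): return a and b
def OR(a, b):  return a or b

# Composing them gives more complex circuits - e.g. XOR ("one or the
# other, but not both"), built only from NOT, AND, and OR.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# Truth table over all input combinations:
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "| AND:", int(AND(a, b)),
              "OR:", int(OR(a, b)), "XOR:", int(XOR(a, b)))
```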
*CORRECTION* The graph says "Quinary System" but then shows 10 possible states - which is actually decimal. Technically, there should be only 5 possible values there, but the overall concept is still the same.
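For anyone unsure of the distinction, a tiny sketch of quinary (5 possible values per digit) versus decimal (10 per digit):

```python
# A quinary (base-5) digit has exactly 5 possible values (0-4), while a
# decimal digit has 10 (0-9). Converting the same number to each base:

def to_base(n, base):
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits)) or "0"

print(to_base(123, 5))    # '443' - every digit is one of 5 states
print(to_base(123, 10))   # '123' - every digit is one of 10 states
```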
Today we’re going to build the ticking heart of every computer - the Central Processing Unit, or CPU. The CPU’s job is to execute the programs we know and love - you know, like GTA V, Slack... and PowerPoint. To make our CPU we’ll bring in the ALU and RAM we made in the previous two episodes and then, with the help of Carrie Anne’s wonderful dictation, (slowly) step through some clock cycles. WARNING: this is probably the most complicated episode in this series. We watched this a few times over ourselves, but don't worry - at about 0.03Hz, we think you can keep up.
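For a feel of what those clock cycles are doing, here is a hedged, toy fetch-decode-execute loop. The instruction set and addresses are invented for illustration, not the exact ones used on screen:

```python
# Toy fetch-decode-execute loop: each pass of the while loop is one
# "clock cycle" of our pretend CPU.

RAM = [
    ("LOAD_A", 14),   # load RAM[14] into register A
    ("LOAD_B", 15),   # load RAM[15] into register B
    ("ADD",    None), # A = A + B
    ("HALT",   None),
] + [None] * 12       # unused addresses
RAM[14], RAM[15] = 3, 4

A = B = 0
pc = 0                                # program counter
while True:
    op, arg = RAM[pc]                 # fetch
    pc += 1
    if op == "LOAD_A":   A = RAM[arg] # decode + execute
    elif op == "LOAD_B": B = RAM[arg]
    elif op == "ADD":    A = A + B
    elif op == "HALT":   break

print(A)  # 7
```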
Today we’re going to step back from hardware and software, and take a closer look at how the backdrop of the Cold War and Space Race, and the rise of consumerism and globalization, brought us from huge, expensive codebreaking machines in the 1940s to affordable handhelds and personal computers in the 1970s. This is an era that saw huge government-funded projects - like the race to the Moon - and afterward, a shift towards the individual consumer, the commoditization of components, and the rise of the Japanese electronics industry.
So last episode we talked about some basic file formats, but what we didn’t talk about is compression. Often files are way too large to be easily stored on hard drives or transferred over the Internet - the solution, unsurprisingly, is to make them smaller. Today, we’re going to talk about lossless compression, which will give you the exact same thing when reassembled, as well as lossy compression, which uses the limitations of human perception to remove less important data. From listening to music and sharing photos, to talking on the phone and even streaming this video right now, the ways we use the Internet and our computing devices just wouldn’t be possible without the help of compression.
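As a minimal sketch of the lossless idea, here is run-length encoding, one of the simplest lossless schemes - decoding reproduces the input exactly:

```python
# Run-length encoding (RLE): repeated values are stored once with a
# count. Decoding rebuilds the original perfectly - nothing is lost.

def rle_encode(data):
    out, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append((run, data[i]))
        i += run
    return out

encoded = rle_encode("WWWWWBBBWW")
print(encoded)                               # [(5,'W'), (3,'B'), (2,'W')]
print("".join(c * n for n, c in encoded))    # 'WWWWWBBBWW' - exact original
```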
Today we start a three-episode arc on the rise of a global telecommunications network that changed the world forever. We’re going to begin with computer networks, and how they grew from small groups of connected computers on local area networks, or LANs, to eventually larger worldwide networks like the ARPANET and even the Internet we know today. We'll also discuss how technologies like Ethernet, MAC addresses, IP addresses, packet switching, network switches, and TCP/IP were developed to solve new problems as our computers became ever more connected. Next week we’ll talk about the Internet, and the week after, the World Wide Web!
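To illustrate the packet-switching idea named above, here is a toy sketch in which a message is split into individually addressed packets that can arrive out of order and still be reassembled (the addresses are made up):

```python
# Toy packet switching: split a message into small, individually
# addressed packets; the network may reorder them, and the receiver
# puts them back together using sequence numbers.

import random

message = "packets can take different routes"
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": i, "data": message[i:i+8]}
    for i in range(0, len(message), 8)
]

random.shuffle(packets)                      # the network reorders them...
packets.sort(key=lambda p: p["seq"])         # ...the receiver reassembles
print("".join(p["data"] for p in packets))   # original message, intact
```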
Today we’re going to talk about how computers see. We’ve long known that our digital cameras and smartphones can take incredibly detailed images, but taking pictures is not quite the same thing as seeing. For the past half-century, computer scientists have been working to help our computing devices understand the imagery they capture, leading to advancements everywhere: from tracking hands and whole bodies, to biometrics that unlock our phones, to eventually giving autonomous cars the ability to understand their surroundings.
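One classic building block behind much of this is convolution. Here is a hedged sketch of convolving a tiny grayscale image with a vertical-edge kernel, which responds strongly where brightness changes from left to right:

```python
# Convolving a tiny grayscale image with a vertical-edge kernel: large
# output values mark places where pixel brightness changes horizontally.

image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve(img, k):
    out = []
    for y in range(1, len(img) - 1):
        row = []
        for x in range(1, len(img[0]) - 1):
            row.append(sum(img[y + dy][x + dx] * k[dy + 1][dx + 1]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
        out.append(row)
    return out

print(convolve(image, kernel))   # big values right where the edge is
```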
Today we’re going to talk about how to keep information secret, and this isn’t a new goal. From as early as Julius Caesar’s Caesar cipher, to Mary, Queen of Scots’ encrypted messages plotting to kill Queen Elizabeth in 1587, there has long been a need to encrypt and decrypt private correspondence. This proved especially critical during World War II as Alan Turing and his team at Bletchley Park attempted to decrypt messages from Nazi Enigma machines, and this need has only grown as more and more sensitive tasks are completed on our computers. So today, we’re going to walk you through some common encryption techniques such as the Advanced Encryption Standard (AES), Diffie-Hellman Key Exchange, and RSA, which are employed to keep your information safe, private, and secure.
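The Caesar cipher mentioned above is simple enough to sketch in a few lines - each letter is shifted a fixed amount through the alphabet, and shifting back decrypts:

```python
# Caesar cipher: shift each letter a fixed amount through the alphabet.
# Trivially breakable today, but a clear first example of encryption.

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("attack at dawn", 3)
print(secret)               # 'dwwdfn dw gdzq'
print(caesar(secret, -3))   # 'attack at dawn' - decrypt by shifting back
```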
Note: In October of 2017, researchers released a viable hack against WPA2, known as the KRACK attack. WPA2 uses AES to ensure secure communication between computers and network routers. The problem isn't with AES, which is still considered secure, but with the communication protocol between router and computer. In order to set up secure communication, the computer and router have to agree through what's called a "handshake". If this handshake is interrupted in just the right way, an attacker can cause the handshake to fall back to an insecure state and reveal critical information which makes the connection insecure. As is often the case with these situations, the problem is with an implementation, not the secure algorithm itself.
Cybersecurity is a set of techniques to protect the secrecy, integrity, and availability of computer systems and data against threats. In today’s episode, we’re going to unpack these three goals and talk through some strategies we use - like passwords, biometrics, and access privileges - to keep our information as secure, but also as accessible, as possible. From massive Distributed Denial of Service, or DDoS, attacks, to malware and brute-force password cracking, there are a lot of ways for hackers to gain access to your data, so we’ll also discuss some strategies, like creating strong passwords and using two-factor authentication, to keep your information safe.
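As a back-of-the-envelope illustration of why strong (long) passwords resist brute-force cracking, here is a sketch of how the search space grows with length - the guesses-per-second figure is an assumed number for illustration only:

```python
# The brute-force search space grows exponentially with password length.
GUESSES_PER_SECOND = 1e9   # assumed, illustrative attack speed

for length in (6, 8, 12):
    combos = 62 ** length              # a-z, A-Z, 0-9 per character
    seconds = combos / GUESSES_PER_SECOND
    print(f"{length} chars: {combos:.2e} combinations, "
          f"~{seconds / 86400 / 365:.2e} years to exhaust")
```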
Today we’re going to talk about how we organize the data we use on our devices. You might remember last episode we walked through some sorting algorithms, but skipped over how the information actually got there in the first place! And it is this ability to store and access information in a structured and meaningful way that is crucial to programming. From strings, pointers, and nodes, to heaps, trees, and stacks, get ready for an ARRAY of new terminology and concepts.
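To ground a couple of those terms, here is a hedged sketch of a node-based linked list (each node holds a value plus a "pointer" to the next node) and a stack:

```python
# A node-based linked list and a LIFO stack, two structures from the episode.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node     # reference ("pointer") to the next node

head = Node(1, Node(2, Node(3)))  # 1 -> 2 -> 3
node = head
while node:                       # walk the list by following pointers
    print(node.value)
    node = node.next

stack = []                        # stack: last in, first out
stack.append("a")
stack.append("b")
print(stack.pop())                # 'b' - the most recently pushed item
```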
Hello, world! Welcome to Crash Course Computer Science! So today, we’re going to take a look at computing’s origins, because even though our digital computers are relatively new, the need for computation is not. Since the start of civilization itself, humans have had an increasing need for special devices to help manage laborious tasks, and as the scale of society continued to grow, these computational devices began to play a crucial role in amplifying our mental abilities. From the abacus and astrolabe to the difference engine and tabulating machine, we’ve come a long way toward satisfying this increasing need, and in the process completely transformed commerce, government, and daily life.
Since Joseph Marie Jacquard’s textile loom in 1801, there has been a demonstrated need to give our machines instructions. In the last few episodes, our instructions were already in our computer’s memory, but we need to talk about how they got there - this is the heart of programming. Today, we’re going to look at the history of programming and the innovations that brought us from punch cards and punched paper tape to plugboards and consoles of switches. These technologies will bring us to the mid-1970s and the start of home computing, but they had limitations, and what was really needed was an easier and more accessible way to write programs - programming languages, which we’ll get to next week.
Today we’re going to go a little meta and talk about how computer science can support learning with educational technology. We here at Crash Course are big fans of interactive in-class learning and hands-on experiences, but we also believe in the additive power of educational technology inside and outside the classroom - from the Internet itself and Massive Open Online Courses, or MOOCs, to AI-driven intelligent tutoring systems and virtual reality.
So we ended last episode at the start of the 20th century with special-purpose computing devices such as Herman Hollerith’s tabulating machines. But as the scale of human civilization continued to grow, so did the demand for more sophisticated and powerful devices. Soon these cabinet-sized electro-mechanical computers would grow into room-sized behemoths that were prone to errors. But it was these computers that would help usher in a new era of computation - electronic computing.
Today we’re going to look at how our computers read and interpret computer files. We’ll talk about how some popular file formats like txt, wave, and bitmap are encoded and decoded, giving us pretty pictures and lifelike recordings from just strings of 1’s and 0’s, and we’ll discuss how our computers are able to keep all this data organized and readily accessible to users. You’ll notice in this episode that we’re starting to talk more about computer users, not programmers, foreshadowing where the series will be going in a few episodes.
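To make the "just strings of 1's and 0's" point concrete, here is a small sketch showing the same raw-bytes idea behind two formats - text as character codes, and a toy bitmap-style layout invented here for illustration (not the real BMP format):

```python
# The same underlying idea - a file is just bytes - read two ways.

# A .txt file: bytes interpreted as character codes.
text_bytes = "Hi!".encode("ascii")
print(list(text_bytes))            # [72, 105, 33] - codes for H, i, !

# A toy image "format": width, height, then one grayscale byte per pixel.
pixels = bytes([2, 2,  0, 255, 255, 0])
w, h = pixels[0], pixels[1]
for row in range(h):
    print(list(pixels[2 + row * w : 2 + (row + 1) * w]))
```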
So we ended last episode with programming at the hardware level with things like plugboards and huge panels of switches, but what was really needed was a more versatile way to program computers - software! For much of this series we’ve been talking about machine code, or the 1’s and 0’s our computers read to perform operations, but giving our computers instructions in 1’s and 0’s is incredibly inefficient, and a “higher-level” language was needed. This led to the development of assembly code and assemblers that allow us to use mnemonics and operands to more easily write programs, but assembly language is still tied to the underlying hardware. So by 1952, Navy officer Grace Hopper had helped create the first high-level programming language, A-0, and a compiler to translate that code for our machines. This would eventually lead to IBM’s Fortran and then a golden age of programming languages over the coming decades. Most importantly, these new languages utilized new abstractions to make programming easier and more powerful, giving more and more people the ability to create new and amazing things.
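Here is a hedged toy sketch of what an assembler does - translating mnemonics and operands into numeric machine code. The opcode table and byte layout are invented for illustration, not a real instruction set:

```python
# A toy "assembler": each source line's mnemonic and operand become one
# byte of machine code (4-bit opcode packed with a 4-bit address).

OPCODES = {"LOAD_A": 0b0010, "LOAD_B": 0b0001, "ADD": 0b1000, "STORE_A": 0b0100}

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        machine_code.append((OPCODES[mnemonic] << 4) | int(operand))
    return machine_code

program = """
LOAD_A 14
LOAD_B 15
ADD 0
STORE_A 13
"""
print([f"{byte:08b}" for byte in assemble(program)])
```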
Today, we're going to discuss the critical role graphical user interfaces, or GUIs played in the adoption of computers. Before the mid 1980's the most common way people could interact with their devices was through command line interfaces, which though efficient, aren't really designed for casual users. This all changed with the introduction of the Macintosh by Apple in 1984. It was the first mainstream computer to use a GUI, standing on the shoulder of nearly two decades of innovation including work from the father of the GUI himself, Douglas Englebart, and some amazing breakthroughs at Xerox Parc.
Today we're going to talk about hackers and their strategies for breaking into computer systems. Now, not all hackers are malicious cybercriminals intent on stealing your data (those people are known as Black Hats). There are also White Hats who hunt for bugs, close security holes, and perform security evaluations for companies. And there are a lot of different motivations for hackers - sometimes just amusement or curiosity, sometimes money, and sometimes the promotion of social or political goals. Regardless, we're not going to teach you how to become a hacker in this episode, but we are going to walk you through some of the strategies hackers use to gain access to your devices, so you can be better prepared to keep your data safe.
*CORRECTION* AT "whatever" should not have a leading ' The correct username field should be: whatever’; DROP TABLE users;