Console gaming is hardly different from PC gaming, and much of what people say to put PC gaming above console gaming is simply wrong.
I’m not sure about you, but for the past few years, I’ve been hearing people go on and on about PCs’ “superiority” over the console market. People cite various reasons why they believe gaming on a PC is “objectively” better than console gaming, often reasons related to power, cost, ease of use, and freedom. …Only problem: much of what they say is wrong. There are many misconceptions thrown around in the PC vs. console debate that I believe need to be addressed. This isn’t about “PC gamers being wrong” or “consoles being the best,” absolutely not. I just want to cut through some of the claims people use to put down console gaming, and show that console gaming is remarkably similar to PC gaming. Yes, this is coming from someone who mainly games on console, but I’m also getting a new PC that I’ll game on as well, not to mention the 30 PC games I already own and play. I’m not particularly partial to one over the other. I’ll mainly focus on the PlayStation side of the consoles, because I know it best, but much of what I say applies to Xbox as well. Just because I don’t point out many specific Xbox examples doesn’t mean they aren’t out there.
“PCs can use TVs and monitors.”
This one isn’t so much a misconception as the implication of one, and overall just… confusing. It shows up in some articles and in the pcmasterrace “why choose a PC” section, where they practically imply that consoles can’t do this. Yes, as long as your PC’s ports match your screen’s inputs, you can plug a PC into either… but you can do the same with a console, again, as long as the ports match. I’m guessing the idea here is that gaming monitors often use DisplayPort, as do most dedicated GPUs, while consoles are generally restricted to HDMI… but even so, monitors often have HDMI ports. In fact, PC Magazine just released their list of the best gaming monitors of 2017, and every single one of them has an HDMI port. A PS4 can be plugged into these just as easily as a GTX 1080. Even if the monitor/TV doesn’t have HDMI or AV to connect with your console, just use an adapter. If you have a PC with ports that don’t match your monitor/TV… use an adapter. I don’t know what the point of this argument is, but it’s made a worrying number of times.
“On PC, you have a wide range of controller options, but on console you’re stuck with the standard controller.”
Are you on PlayStation and wish you could use a specific type of controller that suits your favorite kind of gameplay? Despite what some may believe, you have just as many options as PC. Want to play fighting games with a classic arcade-style board, featuring the buttons and joystick? Here you go! Want to get serious about racing and get something more accurate and immersive than a controller? Got you covered. Absolutely crazy about flying games and, like the racers, want something better than a controller? Enjoy! Want Wii-style motion controls? Been around since the PS3. If you prefer the form factor of the Xbox One controller but you own a PS4, Hori’s got you covered. And of course, if keyboard and mouse is what keeps you on PC, there’s a PlayStation-compatible solution for that. Want to use the keyboard and mouse that you already own? Where there’s a will, there’s a way. Of course, these aren’t isolated examples; there are plenty of options for each of these kinds of controllers. You don’t have to be on PC to enjoy alternate controllers.
“On PC you could use Steam Link to play anywhere in your house and share games with others.”
PS4 Remote Play on PC/Mac, PS TV, and PS Vita. PS Family Sharing. Using the same PSN account on multiple PS4s/Xbox Ones and PS3s/360s, or multiple accounts on the same console. In fact, if multiple users are on the same PS4, only one has to buy a game for both users to play it on that PS4. On top of that, only one of them has to have PS Plus for both to play online (if the one with PS Plus registers the PS4 as their primary system). PS4 Share Play: if two people on separate PS4s want to play a game together that only one of them owns, they can join a Party and the owner of the game can have their friend play with them in the game. Need I say more?
“Gaming is more expensive on console.”
Part 1: The Software

This is one that I find… genuinely surprising. There have been a few times I’ve mentioned that part of the reason I chose a PS4 is budget gaming, only to be told that “games are cheaper on Steam.” To be fair, there are a few games on PSN/XBL that are more expensive than they are on Steam, so I can see how someone could believe this… but apparently they forgot about discs. Dirt Rally, a hardcore racing sim, is… still $60 on all 3 platforms digitally… even though its successor is out.
See my point? Oftentimes the game is cheaper on console because of the disc alternative that’s available for practically every console-available game, even when the game is brand new. Dirt 4 - remember that Dirt Rally successor I mentioned?
Yes, you could either buy this relatively new game digitally for $60, or just pick up the disc at a discounted price. And again, this is for a game that came out 2 months ago, while its predecessor’s digital cost is locked at $60. Of course, I’m not going to ignore the fact that Dirt 4 is currently (as of writing this) discounted on Steam, but on PSN it also happens to be discounted by about the same amount.

Part 2: The Subscription

Now… let’s not ignore the elephant in the room: PS Plus and Xbox Live Gold. These would be ignorable if they weren’t required for online play (on the PlayStation side, it’s only required on PS4, but still). So yes, it’s something that will be included in the cost of your PS4 or Xbox One/360, assuming you play online. Bummer, right? Here’s the thing: although you have to factor this $60 annual cost into your console, you can make it balance out at worst, and make it work for you as a budget gamer at best. As nice as it would be to not deal with the price at all, it’s not a problem if you use it correctly.

Imagine going to a new restaurant. This restaurant has some meals you can’t get anywhere else, and fair prices compared to competitors. Only problem: you have to pay a membership fee to have the sides. You can sit down and enjoy your steak or pasta, but if you want a side to make it a full meal, you have to pay an annual fee. Sounds shitty, right? But here’s the thing: not only does this membership let you have sides with your meal, it also lets you eat two meals for free every month, and gives you exclusive discounts on other meals, drinks, and desserts. Let’s look at PS Plus for a minute: for $60 per year, you get:
2 free PS4 games, every month
2 free PS3 games, every month
1 PS4/PS3 and Vita compatible game, and 1 Vita-only game, every month
Exclusive/Extended discounts, especially during the weekly/seasonal sales (though you don’t need PS Plus to get sales, PS Plus members get to enjoy the best sales)
access to online multiplayer
So yes, you’re paying extra because of that membership, but what you get with it pays for itself and then some. In fact, let’s ignore the discounts for a minute: you get 24 free PS4 games, 24 free PS3 games, and 12 Vita-only plus 12 Vita-compatible games, up to 72 free games every year. Even if you only own one of these consoles, that’s still 24 free games a year. Sure, maybe you get games one month that you don’t like; then just wait until next month. In fact, let’s look at Just Cause 3 again. It was free for PS Plus members in August, which is a pretty big deal. Why is this significant? Because it’s, again, a $60 digital game. That means with this one download, you’ve balanced out your $60 annual fee. Meaning? Every free game after that is money saved, every discount after that is money saved. And this is a trend: every year, PS Plus will release a game that balances out the entire service cost, then another 23 that only add icing to that budget cake. (Though you could just count games toward paying off PS Plus until you hit $60 in savings, but still.) All in all, PS Plus, and Xbox Live Gold, which offers similar benefits, saves you money. On top of that, again, you don’t need these memberships to get discounts, but with them, you get more discounts. Now, I’ve seen a few Steam games go free for a week, but what about free for an entire month? And even if you want to talk about Steam Summer Sales, what about the PSN summer sale, or again, disc discounts? A lot of research and math would be needed to see whether every console gamer would save money compared to every Steam gamer for the same games, but at the very least? The costs balance out, at worst.

Part 3: The Systems
Xbox and PS2: $299
Xbox 360 and PS3: $299 and $499, respectively
Xbox One and PS4: $499 and $399, respectively.
Rounded up a few dollars, that’s $1,000 - $1,300 in day-one consoles, just to keep up with the games! Crazy, right? So-called budget systems, such a rip-off. Well, keep in mind that the generations here aren’t short. The 6th generation, from the launch of the PS2 to the launch of the next generation of consoles, lasted 5 years, or 6 based on the launch of the PS3 (though you could say it was 9 or 14, since the Xbox wasn’t discontinued until 2009, and the PS2 was supported all the way to 2014, a year after the PS4 was released). The 7th gen lasted 7 - 8 years, again depending on whether you count from the launch of the Xbox 360 or the PS3. The 8th gen has lasted 4 years so far. That’s 17 years that the console money is spread over. If you had a Netflix subscription at its original $8 monthly plan for that amount of time, it would total over $1,600. And let’s be fair here: just like you could upgrade your PC hardware whenever you wanted, you didn’t have to get a console at launch. Let’s look at PlayStation again: in 2002, only two years after its release, the PS2’s retail price was cut from $300 to $200. The PS3 Slim, released 3 years after the original, was $300, $100 - $200 lower than the original retail cost. The PS4? You could’ve gotten either the Uncharted bundle for $350, or one of the PS4 Slim bundles for $250. That brings it down to $750 - $850, which again, is spread over a decade and a half. This isn’t even counting used consoles, sales, or the further price cuts I didn’t mention. Even if that still sounds like a lot of money to you, even if you’re laughing at the thought of buying new systems every several years because your PC “is never obsolete,” tell me: how many parts have you changed out in your PC over the years? How many GPUs have you been through? CPUs? Motherboards? RAM sticks, monitors, keyboards, mice, CPU coolers, hard drives? That adds up. You don’t need to replace your entire system to spend a lot of money on hardware.
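For what it's worth, the back-of-the-envelope math above checks out; here is the arithmetic as a quick sketch, using only the dollar figures and the 17-year span quoted in this section:

```python
# Back-of-the-envelope check of the console cost figures above.
# Every dollar amount and the 17-year span come from the section itself.

# Day-one launch prices, PlayStation path (PS2 + PS3 + PS4):
ps_day_one = 299 + 499 + 399            # 1197 -> roughly $1,200

# Waiting for price cuts instead: PS2 at $200, PS3 Slim at $300,
# PS4 Slim bundle at $250 (or the $350 Uncharted bundle at the high end):
ps_patient = 200 + 300 + 250            # 750  -> the "$750 - $850" range
ps_patient_high = 200 + 300 + 350       # 850

# Netflix at its original $8/month over the same 17 years:
netflix_total = 8 * 12 * 17             # 1632 -> "over $1,600"

print(ps_day_one, ps_patient, ps_patient_high, netflix_total)
```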
Even if you weren’t upgrading for the sake of upgrading, I’d be amazed if the hardware you’ve been pushing with gaming lasted even a third of that 17-year period. Computer parts aren’t designed to last forever, and they really won’t when you’re pushing them with intensive gaming for hours upon hours. Generally speaking, your components might last you 6 - 8 years, if you’ve got the high-end stuff. But let’s assume you bought a system 17 years ago that was a beast for its time, something so powerful that even as its parts degraded over time, it’s still going strong. Problem is: you will have to upgrade something eventually. Even if you’ve managed to get this far into the gaming realm on the same 17-year-old hardware, I’m betting you didn’t do it with a 17-year-old operating system. How much did Windows 7 cost you? Or 8.1? Or 10? Oh, and don’t think you can skirt the cost by getting a pre-built system; the cost of Windows is embedded in the price of the machine (why else would Microsoft allow their OS on so many machines?). Sure, Windows 10 was a free upgrade for a year, but that’s only half of its lifetime: you can’t get it for free now, and haven’t been able to for the past year. On top of that, the free period was an upgrade; you had to pay for 7 or 8 first anyway. Point is, as much as one would like to say they didn’t need to buy a new system every so often for the sake of gaming, that doesn’t mean they haven’t been paying for hardware, and even if you’ve only been PC gaming recently, you’ll be spending money on hardware soon enough.
“PC is leading the VR—”
Let me stop you right there. If you added together the total number of Oculus Rifts and HTC Vives sold to this day, and threw in another 100,000 just for the sake of it, that number would still be under the number of PSVR headsets sold. Why could this possibly be? For a simple reason: affordability. The systems needed to run the PC headsets cost $800+, and the headsets are $500 - $600, when discounted. PSVR, on the other hand, costs $450 for the full bundle (headset, camera, and Move controllers, with a demo disc thrown in), and can be played on either a $250 - $300 console or a $400 console, the latter recommended. Even if you want to say the Vive and Rift are more refined, a full PSVR set, system and all, can cost just over $100 more than a Vive headset alone. If anything, PC isn’t leading the VR gaming market; the PS4 is. It’s the system bringing VR to the most consumers, showing them what the future of gaming could look like. Not to mention that as the PlayStation line grows more powerful (4.2-TFLOP PS4 Pro, 10-TFLOP “PS5”…), it won’t be long until it can run the same VR games as PC. Either way, this shows there is a console equivalent to the PC VR options. Sure, there are some games you’d only be able to play on PC, but there are also some games you’d only be able to play on PSVR. …Though to be fair, if we’re talking about VR in general, none of these headsets hold a candle to, surprisingly, Gear VR.
“If it wasn’t for consoles holding devs back, then they would be able to make higher quality games.”
This one is based on the idea that consoles are so “low spec” that when a developer has to take them into account, they can’t design the game to be nearly as good as it would be otherwise. I mean, have you ever seen the minimum specs for games on Steam? GTA V
Actually, bump all the memory requirements up to 8 GB, and those are some decent specs, relatively speaking. And keep in mind these are the minimum specs just to run the games. It’s almost as if the devs didn’t worry about console specs when making the PC version of the game, because this version isn’t on console. Or maybe the consoles aren’t holding the games back that much, because they’re not that weak. Just a hypothesis. But the devs are still ooobviously having to account for weak consoles, right? They could make their games sooo much more demanding if they were PC-only, right? Right? No. Not even close. iRacing
CPU: Intel Core i3, i5, i7 or better or AMD Bulldozer or better
Memory: 8 GB RAM
GPU: NVidia GeForce 2xx series or better, 1GB+ dedicated video memory / AMD 5xxx series or better, 1GB+ dedicated video memory
These are PC-only games. That’s right, no consoles to hold them back; they don’t have to worry about whether an Xbox One could handle it. Yet they don’t require anything more than the multiplatform games. Subnautica
So what’s the deal? Theoretically, if developers don’t have to worry about console specs, why aren’t they going all-out and making games that no console could even dream of supporting? Low-end PCs. What, did you think people only game on Steam if they’ve spent at least $500 on gaming hardware? Not all PC gamers have gaming-PC specs, and if devs shut out players who don’t have the strongest PCs, they’d be losing a pretty sizable chunk of their potential buyers. Saying “devs having to deal with consoles is holding gaming back” is like saying “racing teams having to deal with Ford is holding GT racing back.” A: racing teams don’t have to deal with Ford if they don’t want to, which is probably why many of them don’t, and B: even though Ford doesn’t make the fastest cars overall, they still make cars that are awesome on their own; they don’t need to be compared to anything else. I want to go back to that previous point, though: developers having to deal with low-end PCs. It’s integral to the next point:
“PCs are more powerful, gaming on PC provides a better experience.”
This one isn’t so much a misconception as it is… misleading. Did you know that according to the Steam Hardware & Software Survey (July 2017), well over 50% of Steam gamers use a GPU less powerful than a PS4 Slim’s? Things get dismal when compared to the PS4 Pro (or Xbox One X). On top of that, only about 20% of PC gamers own an Nvidia 10-series card (about 15% counting just 1060, 1070, and 1080 owners). Now, to be fair, the large majority of gamers have CPUs with considerably high clock speeds, which is the main factor in CPU gaming performance. But fewer than 50% of Steam gamers have as much RAM as a PS4 or Xbox One, which can really bottleneck what those CPUs can handle. These numbers are hardly better than they were in 2013, all things considered. Sure, a PS3/360 weeps in the face of even a $400 PC, but in this day and age, consoles have definitely caught up. Sure, we could mention that even 1% of Steam accounts represents over 1 million accounts, but that doesn’t mean much compared to the tens of millions of 8th-gen consoles sold; looking at it that way, sure, the number of Nvidia 10-series owners is over 20 million, but there are over 5 times more 8th-gen consoles sold than that. Basically, even though PCs run on a spectrum, saying they’re more powerful “on average” is actually wrong. They have the potential to be more powerful, but most of the time, people aren’t willing to pay the premium for those extra bits of performance. Now why is this important? What matters are the people who paid the premium cost for premium parts, right?
Because of the previous point: PCs don’t have some universal quality advantage over the consoles. Developers will always have to keep low-end PCs in mind, because not even half of all PC players can afford the good stuff, and you have to look at the top quarter of Steam players before you get to PS4-Pro-level specs. If every Steam player were to get a PS4 Pro, it would be an upgrade for over 60% of them, and 70% of them would be getting an upgrade with the Xbox One X. Sure, you could still argue that when you pay more for PC parts, you get a better experience than you could with a console. We can argue all day about budget PCs, but a console can’t match a $1,000 PC build. It’s the same as paying more for car parts; in the end you get a better car. However, there’s a certain problem with that…
“You pay a little more for a PC, you get much more quality.”
The idea here is that as you pay more for PC parts, performance increases at a faster rate than the price does. Problem: that’s not how technology works. Paying twice as much doesn’t get you twice the quality the majority of the time. For example, let’s look at graphics cards, specifically the GeForce 10-series, starting with the GTX 1050.
1.35 GHz base clock
2 GB VRAM
This is our reference, our basis of comparison. Any percentages will be based on the 1050’s specs. Now let’s look at the GTX 1050 Ti, the 1050’s older brother.
1.29 GHz base clock
4 GB VRAM
This is pretty good. You increase the price by only about 27%, and you get an 11% increase in floating-point speed and a 100% increase (double) in VRAM. Sure, you get a slightly lower base clock, but the rest definitely makes up for it. In fact, according to GPU Boss, the Ti managed 66 fps in Battlefield 4, a 22% increase in frame rate, and a 54% increase in MHash/second in bitcoin mining. The cost increase is worth it, for the most part. But let’s get to the real meat of it: what happens when we double our budget? Surely we should see a massive increase in performance; I bet some of you expect that twice the cost means more than twice the performance. The closest price comparison for double the cost is the GTX 1060 (3 GB), so let’s take a look at that.
1.5 GHz base clock
3 GB VRAM
Well… not substantial, I’d say. About a 50% increase in floating-point speed, an 11% increase in base clock speed, and a 1 GB decrease in VRAM. For [almost] doubling the price, you don’t get much. Well, surely raw specs don’t tell the full story, right? Let’s look at some real-world comparisons. Once again, according to GPU Boss, there’s a 138% increase in hashes/second for bitcoin mining, and at 99 fps, an 83% frame-rate increase in Battlefield 4. Well then, raw specs do not tell the whole story! Here’s another one, the 1060’s big brother… or, well, slightly-more-developed twin.
1.5 GHz base clock
6 GB VRAM
Seems reasonable, another $50 for a decent jump in power and double the memory! But, as we’ve learned, we shouldn’t look at the specs alone for the full story. I did do a GPU Boss comparison, but for the BF4 frame rate I had to look at Tom’s Hardware (sorry, miners, GPU Boss didn’t cover the MHash/sec spec either). The verdict? Pretty good, I’d say. With 97 fps, a 79% increase over the 1050… wait. 97? That seems too low… I mean, the 3 GB version got 99. Well, let’s see what TechPowerUp has to say... 94.3 fps. A 74% increase. Huh. Alright, alright, maybe that was just a dud. We can gloss over that, I guess. OK, one more, but let’s go for the big fish: the GTX 1080.
1.6 GHz base clock
8 GB VRAM
That jump in floating-point speed definitely has to mean something, and 4 times the VRAM? Sure, it’s 5 times the price, but as we saw, raw specs don’t always tell the full story. GPU Boss returns to give us the rundown: how do these cards compare in the real world? Well… a 222% (over three-fold) increase in MHash speed, and a 218% increase in fps for Battlefield 4. That’s right: for 5 times the cost, you get roughly 3 times the performance. Truly, the raw specs don’t tell the full story. Increase the cost by 27%, and you increase the frame rate in our example game by 22%. Increase the cost by 83%, and you increase the frame rate by 83%. Sounds good, but increase the cost by 129%, and you get only a 79% increase in frame rate (50 points less than the cost increase). Increase it by 358%, and you increase the frame rate by 218% (140 points less). That’s not paying “a little more for much more power”; that’s a steep drop-off after the third-cheapest option. In fact, did you know you have to get to the 1060 (6 GB) before you can compare the GTX line to a PS4 Pro? Not to mention that at $250, the price of a 1060 (6 GB), you could get an entire PS4 Slim bundle, or that you have to get to the 1070 before you beat the Xbox One X. On another note, let’s look at a PS4 Slim…
800 MHz base clock
8 GB VRAM
…Versus a PS4 Pro.
911 MHz base clock
8 GB VRAM
A 128% increase in floating-point speed and a 13% increase in clock speed, for a 25% difference in cost. Unfortunately there’s no Battlefield 4 comparison to make, but in BF1, the frame rate is doubled (30 fps to 60) and the textures are taken to 11. For what that looks like, I’ll leave it up to this bloke. Not to mention that you can even get the texture buffs in 4K. Just like with the lower-cost GPUs, you get a decent increase in performance for the price here. It’s even worse when you look at the CPU for a gaming PC: the more money you spend, again, the less benefit you get per dollar. Hardware Unboxed covers this in a video comparing different levels of Intel CPUs. One thing to note is that the highest i7 option in the video (the 6700K) was almost always within 10 fps (for a few games, 15 fps) of a certain other CPU on the list, across just about all of the games. …That CPU was the lowest i3 option (the 6100). The i3 was $117 and the i7 was $339, a 189% price difference for what was, on average, a 30% or less difference in frame rate. Even the lowest Pentium option (G4400, $63) was often able to keep up with the i7. The CPU and GPU are usually the most expensive and power-hungry parts of a build, which is why I focused on them (besides the fact that they’re the two most important parts of a gaming PC, outside of RAM). With both, this “pay more to get much more performance” idea is pretty much the inverse of the truth.
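The price-to-performance drop-off argued above can be condensed into a quick sketch. All the percentages are the section's own figures (GTX 1050 as the baseline); "gain per % of cost" is just the fps increase divided by the price increase, so anything under 1.0 means the price grew faster than the performance:

```python
# Frame-rate gain per percentage point of extra cost, using the
# percentage figures quoted in this section (GTX 1050 = baseline).
gpus = {
    "GTX 1050 Ti":  (27,  22),   # (% price increase, % BF4 fps increase)
    "GTX 1060 3GB": (83,  83),
    "GTX 1060 6GB": (129, 79),
    "GTX 1080":     (358, 218),
}
for name, (price_up, fps_up) in gpus.items():
    # > 1.0: fps grew faster than price; < 1.0: the drop-off
    print(f"{name}: {fps_up / price_up:.2f}")

# Same idea on the CPU side: i3-6100 ($117) vs. i7-6700K ($339),
# with the ~30% average frame-rate gap cited above.
cpu_price_gap = (339 - 117) / 117 * 100
print(f"{cpu_price_gap:.1f}% more money for a ~30% frame-rate difference")
```

The ratios come out around 0.81, 1.00, 0.61, and 0.61, which is the drop-off after the third-cheapest card in numeric form.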
“The console giants are bad for game developers; Steam doesn’t treat developers as badly as Microsoft or especially Sony.”
Now, one thing you might’ve heard is that the PS3 was incredibly difficult to make games for, which, for some, fueled the idea that console hardware is difficult to develop for compared to PC… but this ignores a very basic idea we’ve already touched on: if the devs don’t want to make a game compatible with a system, they don’t have to. In fact, this is why Left 4 Dead and other Valve games aren’t on PS3: they didn’t want to work with its hardware, calling it “too complex.” That didn’t stop the game from selling well over 10 million units worldwide. If anything, this was a problem for the PS3, not the dev team. It also ignores that games like LittleBigPlanet, Grand Theft Auto IV, and Metal Gear Solid 4 all came out on PS3 in the same year as Left 4 Dead (2008). Apparently, plenty of other dev teams didn’t have much of a problem with the PS3’s hardware, or at the very least, they got used to it soon enough. On top of that, when designing the 8th-gen consoles, both Sony and Microsoft sought CPUs that were easier for developers to work with, which included decisions that considered the consoles’ use for more than gaming. And using single-chip proprietary CPUs is cheaper and more energy-efficient than buying pre-made CPUs and boards, which is a far better reason for using them than some conspiracy about Sony and MS trying to make devs’ lives harder. Now, console exclusives are apparently a point of contention: it’s often said that exclusives can cause developers to go bankrupt. However, exclusivity doesn’t have to be a bad thing for the developer. For example, when Media Molecule pitched their game to a publisher (Sony, coincidentally), they didn’t end up tied into something detrimental to them. Their initial funding lasted for 6 months. From there, Sony offered additional funding in exchange for console exclusivity.
This may sound concerning to some, but the game went on to sell almost 6 million units worldwide and launched Media Molecule into the gaming limelight. Sony later bought the development studio, but 1: this was in 2010, two years after LittleBigPlanet’s release, and 2: Media Molecule seem pretty happy about it to this day. If anything, signing with Sony was one of the best things they could’ve done, in their opinion. Does this sound like a company that has it out for developers? There are plenty of examples people will use to put Valve in a good light, but even Sony, by comparison, treats developers well.
“There are more PC gamers.”
The total number of active PC gamers on Steam has surpassed 120 million, which is impressive, especially considering that this number is double 2013’s figure (65 million). But the number of monthly active users on Xbox Live and PSN? About 120 million total (1, 2). EDIT: You could argue this isn’t an apples-to-apples comparison, sure. So if you want to compare monthly Steam users to console instead? Steam has about half of what the consoles do, at 67 million. Now, back to that 65 million total-user figure for Steam: the best reference I could find for PlayStation’s number was an article giving the number of registered PSN accounts in 2013: 150 million. In a similar 4-year period (2009 - 2013), the number of registered PSN accounts didn’t double, it sextupled. Considering that the PS4 is already at 2/3 of the PS3’s sales despite being 3 years younger than its predecessor, I’m sure this trend is at least generally consistent. For example, let’s look at DOOM 2016, an awesome fast-paced shooting title with graphics galore… On a single platform, it sold best on PC/Steam: 2.36 million Steam sales, 2.05 million PS4 sales, 1.01 million Xbox One sales. But keep in mind… when you add the console sales together, you get over 3 million sales on the 8th-gen systems. Meaning: this game sold best on console. In fact, the Steam sales have only recently surpassed the PS4 sales. By the way, VGChartz only shows sales of physical copies, so the combined PS4 and Xbox sales, with digital included, are even higher than 3 million. This isn’t uncommon, by the way. Even with games where the PC sales are higher than either console’s, there are generally more console sales in total. But, to be fair, this isn’t anything new. PC gamers have never dominated the market; the percentages have always been about this much.
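The DOOM example is simple addition, but it carries the whole point; here it is as a quick sketch, using the section's own physical-copy sales figures (in millions):

```python
# DOOM (2016) physical sales from the section, in millions of copies.
steam = 2.36
ps4, xbox_one = 2.05, 1.01

console_total = ps4 + xbox_one           # ~3.06 million
print(steam > ps4)                       # True: PC is the biggest single platform
print(console_total > steam)             # True: consoles sell more combined
```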
PC can end up being the largest single platform for games, but consoles usually sell more copies total. EDIT: There were other examples but... Reddit has a 40,000-character limit.
This isn’t to say there’s anything wrong with PC gaming, and this isn’t to exalt consoles. I’m not here to be the hipster defending the little guy, nor to put down someone or something out of spite. This is about showing that PCs and consoles are overall pretty similar, because there isn’t much dividing them, and that there isn’t anything wrong with being a console gamer. There isn’t some chasm separating consoles and PCs; at the end of the day, they’re both computers that are (generally) designed for gaming. This is about unity as gamers, to show that there shouldn’t be a massive divide just because of the computer system you game on. I want gamers to be in an environment where specs don’t separate us; whether you got a $250 PS4 Slim or just built a $2,500 gaming PC, we’re here to game and should be able to have healthy interactions regardless of platform. I’m well aware that this isn’t going to fix… much, but it needs to be said: there isn’t a huge divide between PCs and consoles; they’re far more similar than people think. Each side has upsides and downsides the other doesn’t. There’s so much more I could touch on, like how you can use SSDs or 3.5-inch hard drives with both, or that even though PC part prices go down over time, so do console prices, but I just wanted to address the main points people use to needlessly separate the two kinds of systems (looking at you, PCMR) and correct them, to get the point across. I thank anyone who takes the time to read all of this, and especially anyone who doesn’t take what I say out of context. I also want to note that, again, this isn’t “anti-PC gamer.” If it were up to me, everyone would be a hybrid gamer. Cheers.
Q: 1) Hello, what’s a better strategy for bitcoin holders if it hard forks at 75%? Is it worth holding the coins on the minority chain, or better to sell them? Will the value of coins on the majority chain be weakened or reinforced? Thank you

A: 1) BIP109 does not hard fork at 75%; it hard forks 28 days after 75% has been reached, so when the hard fork happens, there should be almost zero hash power on the minority chain. So there will not be a minority chain. If I am wrong and blocks are created on the minority chain, people plan to get enough hash power to replace those blocks with empty blocks, so it is impossible to make any transactions on the minority chain.

Q: 2) If Bitcoin split into two chains, would it cause panic in the market, and would the overall market capitalization fall?

A: 2) Bitcoin split into two chains accidentally in March of 2013, and there was panic selling; the price dropped from $48 to $37 within a few hours. But the mining pools very quickly agreed on which branch of the chain they would support, the problem was resolved within a day, and a week later the price was over $60. That shows the strength of consensus and incentives: the mining pools did what was best for Bitcoin because that is what is best for themselves in the long term.

Q: 3) It now requires 60-70 GB of space to run a full-node wallet, and synchronization takes several days. Technically, is it possible in the future for a full-node wallet to take only a little space and synchronize quickly? (Not counting light wallets and other third-party wallets.)

A: 3) You can run a pruned node that does not store the full block chain today (I’m running six right now on inexpensive servers around the world to test some new code). It is technically possible to get fast synchronization without giving up any trust, but it would require miners to do more work (they would have to compute, store, and validate an “unspent transaction output commitment hash” in the block chain).
There are also schemes that would give you fast synchronization at a lightweight-wallet level of trust, but that work toward requiring no trust at all if you stay connected to the network long enough. Some developers say that you are not really using Bitcoin unless you run a full node, but that is wrong. Bitcoin was designed so that you can make the choice of speed and convenience versus trust. You give up very, very little trust if you run a lightweight wallet that supports multisignature transactions, and I think that is what most people should be running. Q: 4) What do you think about Ethereum? Can Bitcoin achieve all the same functions claimed by Ethereum? Thank you. A: 4) I think most of the interesting things you can do with Ethereum you can also do with multi-signature Bitcoin transactions. I haven’t seen a really great use of Ethereum yet, and I think there will be a big problem with Ethereum smart contracts that are designed to steal people’s money, because very few people will have the skill necessary to tell if a complicated smart contract is correct. I’m watching the rootstock.io project, which brings Ethereum contracts to Bitcoin. Q: 5) Is it possible that Nakamoto may still participate in the development of Bitcoin under a pseudonym? When did he last contact you? Will he be back? A: 5) Yes, it is possible. I tell reporters who ask me about Satoshi: The idea of Bitcoin is important; who invented it is an interesting mystery, but I think it should remain a mystery until whoever invented it decides to step forward. We should respect Satoshi's privacy. Q: 6) Some governments can prevent people from accessing foreign information using technical methods (like the Great Firewall), so people need to get across the wall first if they want information from abroad. Technically speaking, is it possible that a government could block and damage the use of Bitcoin? If so, is there any method to get across the wall? 
A: 6) If a government controls network access into and out of their country (like the Great Firewall), they could easily block connections to and from today’s Bitcoin peer-to-peer network. Connections are not encrypted in any way, and most connect to port 8333, which would be easy to block. However, blocking connections inside the country would be much harder. And it only takes one encrypted or satellite or microwave or laser connection that bypasses the firewall to get around the blockage and get blocks and transactions flowing across the border again. I think governments that decide they don’t like Bitcoin are more likely to pass laws that make it a crime to use a currency other than the official government currency to pay for things. Q: 7) You insist on a hard fork at 75%, while Chinese mining pools insist on 90%, so it may be easier to get support from China if Classic changes to 90%. Have you considered communicating with the Chinese mining pools (such as by convening a meeting) to reduce differences? A: 7) Yes, I was in Beijing a few weeks ago to better understand what some of the Chinese mining pools are thinking. It was a productive meeting, and I look forward to communicating more with them soon. Q: 8) How will the halving and the block size increase impact the bitcoin price, in your opinion? Thanks. A: 8) The price, today, is a reflection of confidence. If people think Bitcoin will be valuable in the future, they are willing to buy it and hold it. Everybody knows the halving will happen, so, theoretically, that should not affect today’s price. I believe that increasing the block size limit would be very good for the price, because Bitcoin is more valuable the more people who are able to use it. Q: 9) Technically, Bitcoin must also have drawbacks. Some disadvantages may be improved in the future, while some may be difficult to improve. Which shortcomings do you think will be hard for Bitcoin to improve? 
Are you an optimist, thinking that all technical shortcomings are temporary and will all likely be improved in the future? A: 9) Every successful technology is full of shortcomings. It is always easier to look backwards and see your mistakes. Smart engineers are very good at working around those shortcomings, and wise engineering managers know when to work around a shortcoming to remain compatible with the existing technology and when it makes sense to break compatibility because eliminating a shortcoming would have large benefits. Q: 10) If some altcoin surpasses Bitcoin in the future, it must have an advantage Bitcoin cannot match, right? Conversely, if Bitcoin itself evolves fast, improves and adds new features, it will be difficult to surpass and eliminate, right? What do Bitcoin's scalability and capacity to evolve look like? A: 10) People are funny -- I can imagine an altcoin that has no technology advantages over Bitcoin, but some people prefer it for some reason. I live in a town where a lot of people care a lot about the environment, and I could imagine them deciding to use a “GreenCoin” where all miners must be inspected regularly and must use only solar power. I think many engineers tend to over-estimate the importance of new features, and under-estimate the importance of reliability, convenience and reputation. Satoshi designed Bitcoin to be very scalable, and to be able to evolve. I think the best way for any technology to scale and evolve is competition -- make the technology open, and let companies or teams compete to build the most reliable, convenient and secure products. That looks like (and is!) a very messy, chaotic process, but it produces better results, faster, than a single person or team deciding on one approach to solving every problem. Q: 11) If R3 succeeds, will it challenge Bitcoin in transnational remittances? 
A: 11) Maybe -- if the banks involved in R3 could make it very convenient to get money into and out of their blockchain. They might not be able to do that because of regulations, though. But I don’t know much about the international remittance market and what regulations the banks will have to deal with. Q: 12) Can a blockchain only be secured by mining? Some private blockchains do not have mining; are they really blockchains? A: 12) Security is not “yes it is secure” or “no it is not secure.” Proof of work (mining) is the most secure way we know of to secure a blockchain, but there are less secure methods that can work if less security is OK. And less security is OK for some private blockchains, because if somebody cheats, they can be taken to court and money can be recovered. Q: 13) Will public chains, private chains and the R3 chain coexist for a long time, or will only one chain survive in the end? What is the relationship among the Bitcoin block chain, private chains and the R3 chain -- complementary or competitive? Will the Bitcoin block chain eventually win? A: 13) My guess is all of the “blockchain for everything” excitement will die down in a year or two and a lot of people will be disappointed. Then a few years later there will be blockchains for everything, running quietly inside stock markets and currency exchanges and lots of other places. Some of them will use the Bitcoin blockchain, some of them won’t, and nobody besides blockchain engineers will care much. Throughout it all, I think it is most likely Bitcoin continues to grow, hopefully with less drama as it gets bigger and more mature. Q: 14) Some people think that it will be difficult for the outside world to understand the technical details if the lightning network is controlled by Blockstream or another company, resulting in technological centralization. What’s your opinion? A: 14) I don’t worry about that; the lightning protocol is being designed in the open as an open standard. 
It is complicated, but not so complicated that only one person or company can understand it. Q: 15) What is Bitcoin Core's procedure for modifying the rules? Take the 2MB hard fork proposal as an example: I have seen concerns that if just one of the five Core developers with write access rejects the proposal, it will be rejected. If that happens, does it mean the hard fork launch in July will be abandoned? What percentage of Core developers must agree before code is written for a major fork like the 2MB hard fork? Are there any specific standards, or does the lead developer have the final decision? A: 15) That is a good question for the current active Core developers. When I was the lead developer, I would make a final decision if a decision needed to be made.
Q: What do you think about the future of increasing bitcoin block size limit? A: It will happen sooner or later -- almost everybody agrees it must happen. I am still working to make it happen sooner, because the longer it takes, the worse for Bitcoin.
Q: What decision-making process do you think should be used for future Bitcoin development? For example, WuJiHan's proposal of having service providers and mining pools collect individual miner and user opinions, or a non-profit standards committee like the IEEE, consisting of people with enough expertise in Bitcoin, economics and finance? A: I think we should look at how development of other very successful technologies works (like email or the http protocol). I am not an expert, but open standards and open processes for participating in creating standards that are either adopted by the market or not (like the IETF process) seem to work the best.
Q: From my experience on Reddit, people are now starting to understand that evil is not Blockstream/Core's intention. They simply have a very different vision of how the Bitcoin network should run and where future development should head. They do whatever they can to protect their vision, even dirty tricks, because they feel they are bringing justice. Similarly, in the Chinese community, we see the same situation. Many Chinese Bitcoiners who showed strong enthusiasm in the past now differ with each other. This even happens among my own real-life friends. My question is: How can we separate these two groups of people who have widely divergent visions? Bitcoin cannot proceed while carrying two totally different visions. A: I don’t know! It is always best if everybody is free to work on their own vision, but for some reason some people seem to think that the block size limit will prevent big companies from taking over Bitcoin. I think all they will accomplish is making the technology much more complicated. And big companies are much better able to deal with and control highly complicated technologies.
Q: Please share your comments on Ripple, Mr. Guru. A: I haven’t paid very much attention to Ripple -- the last time I looked at it was probably two years ago. Back then I thought they would have trouble with governments wanting to regulate their gateway nodes as money transmitters, but I haven’t even taken the time to see if I was right about that.
Q: Hi Gavin, I think you had a disagreement with the Nakamoto roadmap in Bitcoin design. Can you explain why? Thank you. A: I assume you mean the part where Satoshi says he doesn’t think a second implementation will ever be a good idea. I just think Satoshi was wrong about that-- if you look at very successful protocols, they all have multiple compatible implementations. We understand a lot more about what it takes to be completely compatible and have much better tools to ensure compatibility. And the fact that there now are multiple compatible implementations working on the network (btcd being probably the best example) shows both that it is possible and that the other implementations are not a menace to the network.
Q: 1) For the dispute between Core and Classic, can we refer to the theory of “Common-pool resources” (the Commons) in the Western cultural tradition to understand and grasp the public and neutral nature of Bitcoin, so as to strive for a solution that balances the interests of all parties? A: 1) Maybe. The blockchain could be considered a Commons today -- a common, limited resource. But if control of the block size limit were given to miners, then I don’t think it fits the definition any more, because miners would have the freedom to restrict its use however they saw fit, on a block-by-block basis. That is just a simple, pure market, with transaction creators on one side and miners on the other. Q: 2) For applications requiring Bitcoin multi-signature scripts, can you recommend any programming languages, libraries or tools? A: 2) BitPay has some good tools: https://github.com/bitpay/bitcore I haven’t worked on any multisignature applications since writing the low-level protocol code -- there are probably other great libraries and tools that I just don’t know about.
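To give a sense of what libraries like bitcore handle for you, here is a minimal sketch (my own illustration, not bitcore code) of how an m-of-n CHECKMULTISIG redeem script is assembled from opcodes and public keys. The keys below are fabricated placeholder bytes, not real keys:

```python
# Sketch: assemble a 2-of-3 CHECKMULTISIG redeem script by hand.
# Real applications should use a maintained library such as bitcore;
# the "public keys" here are made-up placeholder bytes.
OP_CHECKMULTISIG = 0xAE

def multisig_redeem_script(pubkeys, m):
    """Serialize an m-of-n bare CHECKMULTISIG script."""
    n = len(pubkeys)
    script = bytes([0x50 + m])             # OP_m (OP_2 = 0x52, etc.)
    for pk in pubkeys:
        script += bytes([len(pk)]) + pk    # push each 33-byte compressed key
    script += bytes([0x50 + n, OP_CHECKMULTISIG])  # OP_n OP_CHECKMULTISIG
    return script

fake_keys = [bytes([i]) * 33 for i in (2, 3, 4)]   # placeholder keys
script = multisig_redeem_script(fake_keys, 2)
print(script.hex()[:4])  # "5221": OP_2, then a 33-byte push
```

In a real application the script would then be hashed to produce a P2SH address, and two of the three key holders would have to sign to spend from it.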
Q: Hello Gavin, are you now still developing Classic? Will Classic proceed? Would you give up Classic and return to Core? A: Yes, yes, and there is no “return to” -- I plan on contributing to lots of projects.
Q: 1) If there are one million entrepreneurs who need funding and asset securitization via block chain technology, is it possible? A: 1) If there are ten million investors willing to fund those entrepreneurs, sure it is possible. The technology won’t be a problem; one million is not a large number for today’s computers. Q: 2) Why can we trust Bitcoin, and what are its advantages for online payment and settlement? Its fees are not as cheap as they used to be, and the time for one confirmation is not fast enough. What are your opinions on the pros and cons of mining and PoW? A: 2) For people in places with good-enough banking systems like the United States or China, purchasing things inside their own country, bitcoin does not have much of an advantage over existing payment systems. But if you are buying something from somebody in another country, or you live in a place where there are no good payment systems, Bitcoin works very well. Proof of work and mining are the most fair, decentralized way to distribute new coins. They are also the best way of securing the network that we know of so far. Perhaps in 30 years, when essentially all of the new coins have been mined and computer scientists have thoroughly studied other ways of securing the network, it might make sense for Bitcoin to start to switch to something other than mining and proof-of-work to secure the network. Q: 3) How likely is it that virtual currency will replace existing legal currency? A: 3) Very unlikely in a large country. I can imagine a small country that uses a larger country’s currency deciding to switch to a crypto currency, though.
Q: 1) You have always insisted on larger blocks. Some people share this view; they just want to increase the block size, regardless of network bandwidth restrictions in China and other developing countries. How do you see this criticism? A: 1) Most people are using Bitcoin over very limited bandwidth connections -- most people use lightweight wallets. If you run a business that needs a fast connection to the Internet, then it is not expensive to rent a server in a data center that has very good bandwidth. Even inexpensive servers have plenty of bandwidth and CPU power to keep up with much higher transaction volume. If you insist on running a full node from your home, the average connection speed in China today is 3.7 megabits per second, which is almost 1,000 transactions per second. Latency through the Great Firewall is a bigger issue right now, but there are several software solutions to that problem that people (including myself) are working on right now. Q: 2) In addition, I'm curious what your opinion is of the current Bitcoin Core team. Do you have any doubts about it? If so, why not act as a Core developer, contributing code to Bitcoin Core to solve these problems? A: 2) I like most of the people on the current Bitcoin Core team, they are great. But there are a couple of people on that team I don’t want to work with, so I have decided to limit the amount of time I spend with that project.
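The "almost 1,000 transactions per second" figure can be sanity-checked with simple arithmetic, assuming an average transaction of roughly 500 bytes (the 500-byte figure is my assumption; the interview does not state it):

```python
# Back-of-the-envelope check of the bandwidth claim above.
# Assumes ~500 bytes per average transaction (an assumption,
# not a figure from the interview).
mbps = 3.7                                 # cited average speed in China
bytes_per_second = mbps * 1_000_000 / 8    # megabits -> bytes
avg_tx_bytes = 500
tx_per_second = bytes_per_second / avg_tx_bytes
print(round(tx_per_second))                # -> 925, i.e. "almost 1,000"
```

With a smaller average transaction (~250 bytes) the same link would carry nearly 2,000 transactions per second, so the claim is conservative either way.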
Q: 1) Hello Gavin, I would like to ask how long it has been since your last contribution to Bitcoin Core or related projects. Beyond your big influence as one of the earliest contributors, don't you think you ought to let the code speak, contributing mostly through development of Bitcoin? A from pangcong: 1) The last commit in Bitcoin Core made by Gavin was on September 30, 2015; after that, Gavin was busy with Bitcoin XT and Bitcoin Classic. His actual development of Bitcoin has never stopped; these records are very clear on GitHub. If you want to ask questions whose answers are obvious, please investigate first. A from Gavin: 1) Also: I submitted some patches to Bitcoin Core a few days ago. Q: 2) Also, you were a neutral software engineer before, seriously committed to improving Bitcoin. But now you're using political means to enhance your impact on the future of Bitcoin; how do you respond to that? A from KuHaiBian: 2) The biggest problem in Bitcoin now is not the block size limit, but that there is only one development team; that is as dangerous as having only one mining pool mining bitcoin. This is the biggest problem Gavin is trying to solve. A from Gavin: 2) I just give my honest opinion, and try to do what I can to make Bitcoin more successful.
Q: There is no systematic process for Bitcoin upgrades. Is there any regulation or restriction on the power of the Core devs? How do we balance the conflict between the centralized power of the devs and the interests of community consensus? Do you think Bitcoin needs to learn from R3 chains or distributed ledger systems, i.e. setting up regulations to constrain the power of the devs, so that only devs with “restricted access” can contribute, not everyone? A: Competition is the best solution. If the Core team does not make their customers happy, then they will be replaced. It might take a year or more for another team to get the reputation for high-quality code that the Core team has acquired over the years.
Q: In 2016, you proposed increasing the block size limit to 8MB, then doubling it every two years. Is it still the most promising expansion plan in your opinion now? If it is, do you think it is possible for the block size to reach 8GB in 2036, particularly given the network speed and bandwidth in developing countries? A: I think it would be best to eliminate the block size limit entirely, and let the miners decide if they should accept or reject blocks. The miners want Bitcoin to succeed, and will not choose a size so large the network cannot handle it. I don’t know if people would agree to eliminate the limit, though. A dynamic limit that grows, but prevents an extremely large ‘attack block’, would also be a good solution. The growing-8MB idea came from the idea that it should be possible for somebody on a home Internet connection to continue to validate every single transaction. However, more research showed that the bottleneck is not the connection from the Internet to our homes (even in China there is plenty of bandwidth there) but connections across international borders. In particular, the Great Firewall can sometimes greatly restrict bandwidth to and from China.
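The 8GB figure in the question follows directly from the proposal's schedule: starting at 8 MB in 2016 and doubling every two years gives ten doublings by 2036:

```python
# The growth schedule implied by "8 MB, doubling every two years".
# Start year 2016 is taken from the question above.
size_mb, year = 8, 2016
schedule = {}
while year <= 2036:
    schedule[year] = size_mb
    size_mb *= 2
    year += 2

print(schedule[2036])  # -> 8192 MB, i.e. the 8 GB in the question
```

So the questioner's 2036 figure is exactly what the schedule produces: 8 × 2¹⁰ = 8192 MB.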
Q: Gavin, hello! Why do you think the community rejected Bitcoin XT? A: It was a mistake to try to make more changes than simply increasing the block size limit.
Q: Now the block size limit problem is not as serious as it was before, when Bitcoin was attacked, and SegWit has been deployed -- so what is the controversy? Why argue to the bitter end; must we argue until Bitcoin dies? Gavin, we all know your contribution to Bitcoin. But in 2015 you said that in Bitcoin software development we need a "dictator" to resolve disputes. I think you want to be this dictator. http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008810.html A: Must we argue until Bitcoin dies: I think it is in the nature of people to argue, so I think we will be arguing about lots of things until either we die or Bitcoin dies. I think in a few years we will look back and wonder why there was so much arguing, but I also think some good things have come from all of the argument.
Q: 1) What do you think about Ethereum? Can smart contracts run on Bitcoin? A: 1) (This question is repeated. Please see Q18-4) Q: 2) What problems may miners face after the halving in July? Thanks! A: 2) There is a small risk that the halving will make a good fraction of the miners stop mining, because they will get about half of the bitcoins they got before the halving. And that might mean blocks take longer to create, which means less space for transactions, which might mean people get frustrated and leave Bitcoin. Which could drop the price even more, causing more miners to stop mining, more frustration, and so on. Miners tell me they have already planned ahead for the halving and this will not happen, which is why I think it is a small risk and I don’t think the halving will be a big problem for most miners. Q: 3) Where can we get the complete code of Bitcoin and review it? A: 3) Bitcoin Core is at: https://github.com/bitcoin/bitcoin Bitcoin Classic: https://github.com/bitcoinclassic/bitcoinclassic btcd: https://github.com/btcsuite/btcd bitcore: https://github.com/bitpay/bitcore
Bitcoin Core 0.10.0 released | Wladimir | Feb 16 2015
Wladimir on Feb 16 2015: Bitcoin Core version 0.10.0 is now available from: https://bitcoin.org/bin/0.10.0/ This is a new major version release, bringing both new features and bug fixes. Please report bugs using the issue tracker at github: https://github.com/bitcoin/bitcoin/issues The whole distribution is also available as torrent: https://bitcoin.org/bin/0.10.0/bitcoin-0.10.0.torrent magnet:?xt=urn:btih:170c61fe09dafecfbb97cb4dccd32173383f4e68&dn=0.10.0&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.ccc.de%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&ws=https%3A%2F%2Fbitcoin.org%2Fbin%2F Upgrading and downgrading How to Upgrade If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), then run the installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or bitcoind/bitcoin-qt (on Linux). Downgrading warning Because release 0.10.0 makes use of headers-first synchronization and parallel block download (see further), the block files and databases are not backwards-compatible with older versions of Bitcoin Core or other software:
Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or other programs. Reindexing using earlier versions will also not work anymore as a result of this.
The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support. If you want to be able to downgrade smoothly, make a backup of your entire data directory. Without this your node will need to start syncing (or importing from bootstrap.dat) anew afterwards. It is possible that the data from a completely synchronised 0.10 node may be usable in older versions as-is, but this is not supported and may break as soon as the older version attempts to reindex. This does not affect wallet forward or backward compatibility. Notable changes Faster synchronization Bitcoin Core now uses 'headers-first synchronization'. This means that we first ask peers for block headers (a total of 27 megabytes, as of December 2014) and validate those. In a second stage, when the headers have been discovered, we download the blocks. However, as we already know about the whole chain in advance, the blocks can be downloaded in parallel from all available peers. In practice, this means a much faster and more robust synchronization. On recent hardware with a decent network link, it can be as little as 3 hours for an initial full synchronization. You may notice a slower progress in the very first few minutes, when headers are still being fetched and verified, but it should gain speed afterwards. A few RPCs were added/updated as a result of this:
getblockchaininfo now returns the number of validated headers in addition to
the number of validated blocks.
getpeerinfo lists both the number of blocks and headers we know we have in
common with each peer. While synchronizing, the heights of the blocks that we have requested from peers (but haven't received yet) are also listed as 'inflight'.
A new RPC getchaintips lists all known branches of the block chain,
including those we only have headers for. Transaction fee changes This release automatically estimates how high a transaction fee (or how high a priority) transactions require to be confirmed quickly. The default settings will create transactions that confirm quickly; see the new 'txconfirmtarget' setting to control the tradeoff between fees and confirmation times. Fees are added by default unless the 'sendfreetransactions' setting is enabled. Prior releases used hard-coded fees (and priorities), and would sometimes create transactions that took a very long time to confirm. Statistics used to estimate fees and priorities are saved in the data directory in the fee_estimates.dat file just before program shutdown, and are read in at startup. New command line options for transaction fee changes:
-txconfirmtarget=n : create transactions that have enough fees (or priority)
so they are likely to begin confirmation within n blocks (default: 1). This setting is over-ridden by the -paytxfee option.
-sendfreetransactions : Send transactions as zero-fee transactions if possible
(default: 0) New RPC commands for fee estimation:
estimatefee nblocks : Returns approximate fee-per-1,000-bytes needed for
a transaction to begin confirmation within nblocks. Returns -1 if not enough transactions have been observed to compute a good estimate.
estimatepriority nblocks : Returns approximate priority needed for
a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not enough free transactions have been observed to compute a good estimate. RPC access control changes Subnet matching for the purpose of access control is now done by matching the binary network address, instead of with string wildcard matching. For the user this means that -rpcallowip takes a subnet specification, which can be
a single IP address (e.g. 1.2.3.4 or fe80::0012:3456:789a:bcde)
a network/CIDR (e.g. 1.2.3.4/24 or fe80::0000/64)
a network/netmask (e.g. 1.2.3.4/255.255.255.0 or fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff)
An arbitrary number of -rpcallowip arguments can be given. An incoming connection will be accepted if its origin address matches one of them. For example: | 0.9.x and before | 0.10.x | |--------------------------------------------|---------------------------------------| | -rpcallowip=192.168.1.1 | -rpcallowip=192.168.1.1 (unchanged) | | -rpcallowip=192.168.1.* | -rpcallowip=192.168.1.0/24 | | -rpcallowip=192.168.* | -rpcallowip=192.168.0.0/16 | | -rpcallowip=* (dangerous!) | -rpcallowip=::/0 (still dangerous!) | Using wildcards will result in the rule being rejected with the following error in debug.log:
Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24).
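The idea of matching the binary network address rather than string wildcards can be sketched with Python's standard ipaddress module. This is an illustration of the concept only, not Bitcoin Core's actual C++ implementation:

```python
# Illustration of binary subnet matching (the concept behind the new
# -rpcallowip behaviour), using the stdlib ipaddress module.
# This is NOT Bitcoin Core's implementation.
import ipaddress

allowed = [
    ipaddress.ip_network("192.168.1.0/24"),           # network/CIDR
    ipaddress.ip_network("10.0.0.0/255.255.255.0"),   # network/netmask
    ipaddress.ip_network("fe80::0012:3456:789a:bcde/128"),  # single IP
]

def is_allowed(addr: str) -> bool:
    """Accept a connection iff its origin matches one allowed subnet."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in allowed)

print(is_allowed("192.168.1.77"))  # True: inside 192.168.1.0/24
print(is_allowed("192.168.2.77"))  # False: no "192.168.*"-style
                                   # wildcard matching any more
```

Note how the old wildcard rule `192.168.1.*` maps naturally onto the CIDR network `192.168.1.0/24`, exactly as in the migration table above.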
REST interface A new HTTP API is exposed when running with the -rest flag, which allows unauthenticated access to public node data. It is served on the same port as RPC, but does not need a password, and uses plain HTTP instead of JSON-RPC. Assuming a local RPC server running on port 8332, it is possible to request:
In every case, EXT can be bin (for raw binary data), hex (for hex-encoded binary) or json. For more details, see the doc/REST-interface.md document in the repository. RPC Server "Warm-Up" Mode The RPC server is started earlier now, before most of the expensive initialisations like loading the block index. It is available now almost immediately after starting the process. However, until all initialisations are done, it always returns an immediate error with code -28 to all calls. This new behaviour can be useful for clients to know that a server is already started and will be available soon (for instance, so that they do not have to start it themselves). Improved signing security For 0.10 the security of signing against unusual attacks has been improved by making the signatures constant time and deterministic. This change is a result of switching signing to use libsecp256k1 instead of OpenSSL. Libsecp256k1 is a cryptographic library optimized for the curve Bitcoin uses which was created by Bitcoin Core developer Pieter Wuille. There exist attacks against most ECC implementations where an attacker on shared virtual machine hardware could extract a private key if they could cause a target to sign using the same key hundreds of times. While using shared hosts and reusing keys are inadvisable for other reasons, it's a better practice to avoid the exposure. OpenSSL has code in their source repository for derandomization and reduction in timing leaks that we've eagerly wanted to use for a long time, but this functionality has still not made its way into a released version of OpenSSL. Libsecp256k1 achieves significantly stronger protection: As far as we're aware this is the only deployed implementation of constant time signing for the curve Bitcoin uses and we have reason to believe that libsecp256k1 is better tested and more thoroughly reviewed than the implementation in OpenSSL.
https://eprint.iacr.org/2014/161.pdf Watch-only wallet support The wallet can now track transactions to and from wallets for which you know all addresses (or scripts), even without the private keys. This can be used to track payments without needing the private keys online on a possibly vulnerable system. In addition, it can help with the (manual) construction of multisig transactions where you are only one of the signers. One new RPC, importaddress, is added which functions similarly to importprivkey, but instead takes an address or script (in hexadecimal) as argument. After using it, outputs credited to this address or script are considered to be received, and transactions consuming these outputs will be considered to be sent. The following RPCs have optional support for watch-only: getbalance, listreceivedbyaddress, listreceivedbyaccount, listtransactions, listaccounts, listsinceblock, gettransaction. See the RPC documentation for those methods for more information. Compared to using getrawtransaction, this mechanism does not require -txindex, scales better, integrates better with the wallet, and is compatible with future block chain pruning functionality. It does mean that all relevant addresses need to be added to the wallet before the payment, though. Consensus library Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library. The purpose of this library is to make the verification functionality that is critical to Bitcoin's consensus available to other applications, e.g. to language bindings such as [python-bitcoinlib](https://pypi.python.org/pypi/python-bitcoinlib) or alternative node implementations. This library is called libbitcoinconsensus.so (or, .dll for Windows). Its interface is defined in the C header [bitcoinconsensus.h](https://github.com/bitcoin/bitcoin/blob/0.10/src/script/bitcoinconsensus.h). In its initial version the API includes two functions:
bitcoinconsensus_verify_script verifies a script. It returns whether the indicated input of the provided serialized transaction
correctly spends the passed scriptPubKey under additional constraints indicated by flags
bitcoinconsensus_version returns the API version, currently at an experimental 0
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface for existing methods should remain stable. Standard script rules relaxed for P2SH addresses The IsStandard() rules have been almost completely removed for P2SH redemption scripts, allowing applications to make use of any valid script type, such as "n-of-m OR y", hash-locked oracle addresses, etc. While the Bitcoin protocol has always supported these types of script, actually using them on mainnet has been previously inconvenient as standard Bitcoin Core nodes wouldn't relay them to miners, nor would most miners include them in blocks they mined. bitcoin-tx It has been observed that many of the RPC functions offered by bitcoind are "pure functions", and operate independently of the bitcoind wallet. This included many of the RPC "raw transaction" API functions, such as createrawtransaction. bitcoin-tx is a newly introduced command line utility designed to enable easy manipulation of bitcoin transactions. A summary of its operation may be obtained via "bitcoin-tx --help" Transactions may be created or signed in a manner similar to the RPC raw tx API. Transactions may be updated, deleting inputs or outputs, or appending new inputs and outputs. Custom scripts may be easily composed using a simple text notation, borrowed from the bitcoin test suite. This tool may be used for experimenting with new transaction types, signing multi-party transactions, and many other uses. Long term, the goal is to deprecate and remove "pure function" RPC API calls, as those do not require a server round-trip to execute. Other utilities "bitcoin-key" and "bitcoin-script" have been proposed, making key and script operations easily accessible via command line. 
Mining and relay policy enhancements

Bitcoin Core's block templates are now for version 3 blocks only, and any mining software relying on its getblocktemplate must be updated in parallel to use libblkmaker either version 0.4.2 or any version from 0.5.1 onward. If you are solo mining, this will affect you the moment you upgrade Bitcoin Core, which must be done prior to BIP66 achieving its 951/1001 status. If you are mining with the stratum mining protocol, this does not affect you. If you are mining with the getblocktemplate protocol to a pool, this will affect you at the pool operator's discretion, which must be no later than BIP66 achieving its 951/1001 status.

The prioritisetransaction RPC method has been added to enable miners to manipulate the priority of transactions on an individual basis.

Bitcoin Core now supports BIP 22 long polling, so mining software can be notified immediately of new templates rather than having to poll periodically. Support for BIP 23 block proposals is now available in Bitcoin Core's getblocktemplate method. This enables miners to check the basic validity of their next block before expending work on it, reducing risks of accidental hardforks or mining invalid blocks.

Two new options control mining policy:

-datacarrier=0/1 : Relay and mine "data carrier" (OP_RETURN) transactions if this is 1.

-datacarriersize=n : Maximum size, in bytes, we consider acceptable for "data carrier" outputs.

The relay policy has changed to more properly implement the desired behavior of not relaying free (or very-low-fee) transactions unless they have a priority above the AllowFreeThreshold(), in which case they are relayed subject to the rate limiter.

BIP 66: strict DER encoding for signatures

Bitcoin Core 0.10 implements BIP 66, which introduces block version 3 and a new consensus rule that prohibits non-DER signatures. Such transactions have been non-standard since Bitcoin v0.8.0 (released in February 2013), but were technically still permitted inside blocks. This change breaks the dependency on OpenSSL's signature parsing, and is required if implementations want to remove all of OpenSSL from the consensus code.

The same miner-voting mechanism as in BIP 34 is used: when 751 out of a sequence of 1001 blocks have version number 3 or higher, the new consensus rule becomes active for those blocks. When 951 out of a sequence of 1001 blocks have version number 3 or higher, it becomes mandatory for all blocks. Backward compatibility with current mining software is NOT provided, so miners should read the first paragraph of "Mining and relay policy enhancements" above.

0.10.0 Change log

Detailed release notes follow. This overview includes changes that affect external behavior, not code moves, refactors or string updates.

RPC:
f923c07 Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
b641c9c Fix addnode "onetry": Connect with OpenNetworkConnection