The Next Processor Change is Within ARM's Reach
As you may have seen, I sent the following Tweet: “The Apple ARM MacBook future is coming, maybe sooner than people expect” https://twitter.com/choco_bit/status/1266200305009676289?s=20 Today, I would like to further elaborate on that.

tl;dr: Apple will be moving to ARM-based Macs in what I believe are 4 stages, starting around 2015 and ending around 2023-2025: release of T1 chip MacBooks, release of T2 chip MacBooks, release of at least one lower-end ARM MacBook model, and transitioning the full lineup to ARM. Reasons for each are below.

Apple is very likely going to switch their CPU platform to their in-house silicon designs built on an ARM architecture. This understanding is fairly common amongst various Apple insiders. Here is my personal take on how this switch will happen and be presented to the consumer. The first question would likely be “Why would Apple do this again?”. Throughout their history, Apple has already made two other storied CPU architecture switches - first from the Motorola 68k to PowerPC in the early 90s, then from PowerPC to Intel in the mid 2000s. Why make yet another? Here are the leading reasons:
Intel has, in recent years, suffered significant losses in both reputation and actual product value, as well as in velocity of product development, breaking their “Tick-Tock” release cadence for the first time in decades. Most recently, they have fallen well behind AMD’s processor lines in cost-to-performance ratio, CPU core count, core design (monolithic vs “chiplet”), power consumption relative to performance, silicon supply (Intel has significant manufacturing process and yield issues), and on-silicon security features. While Intel still wins out in certain enterprise and datacenter applications, and retains a much better reputation for reliability and QA (AMD having shipped numerous chips with a broken random-number generator that prevented some mainstream operating systems from even booting), the number of such applications dwindles with each new release from AMD, and as confidence among enterprise decisionmakers grows. In the public consciousness, Intel is quickly becoming a point of ridicule against Apple’s Mac lineup, rather than a badge of honor.
By moving to their own designs, Apple will be free from Intel’s release schedule, which has recently been unpredictable and subject to routine delays due to poor manufacturing yields. Apple will be able to update their Mac lineup on their own timeline, rather than being forced to delay products based on Intel’s ability to meet a release window. This also allows them to leverage relationships with other silicon fabricators to source chips, rather than relying on Intel’s continued “iteration” that has led to a “14nm++++++++++” process, or the continued lack of product diversity on the 10nm process. Apple will also be free to innovate in the design of the silicon platform, rather than being limited by Intel’s design choices. By having full control of the manufacturing and development cycle, Apple can bring even more in-house optimization to macOS, as they have been doing for iOS and iPadOS over the years.
Using an ARM architecture on the Macs allows for a more unified Apple ecosystem, rather than having separate Mac and iOS-based products. The only distinction will be the device form factor and performance characteristics.
The x86_64 architecture is very old and inefficient, using older methodologies for processor design (CISC vs ARM’s RISC), and the instruction set continues to require silicon support for emulating 1980s-vintage 16-bit modes, as well as ineffectual and archaic memory addressing modes (segmentation, etc.). The x86_64 architecture is like a city built atop a much older city, built atop a yet older city, with every layer carrying NYC levels of infrastructure complexity that suited its time and no further.
Over the last 10 years, Apple has shown that they can consistently produce impressive silicon designs, often leading the market in performance and capability, and Apple has been aggressively acquiring silicon design talent.
A common refrain heard on the Internet is the suggestion that Apple should switch to using CPUs made by AMD, and while this has been considered internally, it will most likely not be chosen as the path forward, even for megalithic giants like the Mac Pro. Even though AMD would mitigate Intel’s current set of problems, it does nothing to address the x86_64 architecture’s problems and inefficiencies, on top of jumping to a platform that doesn’t have a decade of proven support behind it. Why spend a lot of effort re-designing and re-optimizing for AMD’s platform when you can put that effort into your own, and continue the vertical integration Apple is well-known for? I believe that internal development for the ARM transition started around 2015/2016 and is happening in 4 distinct stages. Not all of this is information from Apple insiders; some of it is my own interpretation based on information gathered from supply-chain sources, examination of MacBook schematics, and other indicators from Apple.
Stage 1 (2014/2015 - 2017):
The rollout of computers with Apple’s T1 chip as a coprocessor. This chip is very similar to Apple’s T8002 chip design, which was used for the Apple Watch Series 1 and Series 2. The T1 is primarily present on the first TouchID-enabled Macs, the 2016 and 2017 model year MacBook Pros. Considering the amount of time required to design and validate a processor, this stage most likely started around 2014 or 2015, with early experimentation to see whether an entirely new chip design would be required, or if it would be sufficient to repurpose something in the existing lineup. As we can see, the general-purpose ARM processors aren’t a one-trick pony. To get a sense of the decision making at the time, let’s look back a bit. The year is 2016, and we're witnessing the beginning of the stagnation of Intel's processor lineup. There is not a lot to look forward to other than another “+” being added to the 14nm fabrication process. The MacBook Pro has used the same design for many years now, and its age is starting to show. Moving to AMD is still very questionable, as they’ve historically not been able to match Intel’s performance or functionality, especially at the high end; since the “Ryzen” lineup is still unreleased, there are absolutely no benchmarks or other data to show they are worth consideration, and AMD’s most recent line of “Bulldozer” processors was very poorly received. Now is probably as good a time as any to begin experimenting with the in-house ARM designs, but it’s not time to dive into the deep end yet: our chips are not nearly mature enough to compete, and it’s not yet certain how long Intel will be stuck in the mud. As well, it is widely understood that Apple and Intel have an exclusivity contract in exchange for advantageous pricing. Any transition would take considerable time and effort, and since there is no current viable alternative to Intel, the in-house chips will need to advance further, and breaching a contract with Intel is too great a risk.
So it makes sense to start with small deployments, to extend the timeline, stretch out to the end of the contract, and eventually release a real banger of a Mac. Thus, the 2016 Touch Bar MacBooks were born, alongside the T1 chip mentioned earlier. There are good reasons for abandoning the piece of hardware previously used for a similar purpose, the SMC or System Management Controller. I suspect that the biggest reason was to allow early analysis of the challenges that would be faced migrating Mac built-in peripherals and IO to an ARM-based controller, as well as exploring the manufacturing, power, and performance results of using the chips across a broad deployment, and analyzing any early failure data, then using this to patch any issues, enhance processes, and inform future designs looking towards the 2nd stage. The former SMC duties now moved to the T1 include things like:
Fan speed, voltage, amperage and thermal sensor feedback data
FaceTime camera and microphone IO
PMIC (Power Management Controller)
Direct communication to NAND (solid state storage)
Direct communication with the Touch Bar
Secure Enclave for TouchID
The T1 chip also communicates with a number of other controllers to manage a MacBook’s behavior. Even though it’s not a very powerful CPU by modern standards, it’s already responsible for a large chunk of the machine’s operation. Moving control of these peripherals to the T1 chip also brought about the creation of the fabled BridgeOS software, a shrunken-down watchOS-based system that operates fully independently of macOS and the primary Intel processor. BridgeOS is the first step for Apple’s engineering teams to begin migrating underlying systems and services to integrate with the ARM processor, and it allowed internal teams to more easily and safely develop and issue firmware updates. Since BridgeOS is based on a standard and now well-known system, Apple can leverage existing engineering expertise to flesh out the T1’s development, rather than relying on the more arcane and specialized SMC system, which operates completely differently and requires highly specific knowledge to work with. It also allows reuse of the same fabrication pipeline used for Apple Watch processors, and eliminates the need to have yet another IC design for the SMC, coming from a separate source, saving a bit on cost. Also during this time, on the software side, “Project Marzipan”, today known as Catalyst, came into existence. We'll get to this shortly. For the most part, Stage 1 went without any major issues. There were a few firmware problems at first during the product launch, but they were quickly solved with software updates. Now that engineering teams had experience building, manufacturing, and shipping the T1 systems, Stage 2 could begin.
Stage 2 (2017 - present):
Stage 2 encompasses the rollout of Macs with the T2 coprocessor, replacing the T1. This includes a much wider lineup: MacBook Pro with Touch Bar starting with the 2018 models, MacBook Air starting with the 2018 models, the iMac Pro, the 2019 Mac Pro, as well as the Mac Mini starting in 2018. With this iteration, the more powerful T8012 processor design was used, a further revision of the T8010 design that powers the A10 series processors used in the iPhone 7. This change provided a significant increase in computational ability and brought about the integration of even more devices into the T2. In addition to the T1’s existing responsibilities, the T2 now controls:
Full audio subsystem
Secure Enclave for internal NAND storage and encryption/decryption offload
Management of the whole system’s power and startup sequence, allowing for trusted boot (ensuring a boot chain-of-trust with no malicious code/rootkits/bootkits)
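The chain-of-trust idea behind trusted boot can be sketched in a few lines. This is a simplified illustration, not Apple's implementation (the real T2 verifies cryptographic signatures rooted in an immutable Boot ROM, not bare hashes): each stage carries a trusted digest of the next stage's image and refuses to hand off control if a measurement does not match.

```python
import hashlib

# Simplified illustration of a boot chain-of-trust (not Apple's actual code):
# each stage holds a trusted SHA-256 digest of the next stage's image and
# refuses to continue booting if any measurement does not match.
def verify_boot_chain(stage_images, trusted_digests):
    for image, digest in zip(stage_images, trusted_digests):
        if hashlib.sha256(image).hexdigest() != digest:
            return False  # tampered stage detected: halt the boot
    return True
```

In this model, replacing any stage (say, the bootloader) with modified code changes its digest, so verification fails and the machine refuses to boot.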
Those last 2 points are crucial for Stage 2. Under this new paradigm, the vast majority of the Mac is now under the control of an in-house ARM processor. Stage 2 also brings iPhone-grade hardware security to the Mac. These T2 models also incorporated a supported DFU (Device Firmware Update, more commonly “recovery mode”), which acts similarly to the iPhone DFU mode and allows restoration of the BridgeOS firmware in the event of corruption (most commonly due to user-triggered power interruption during flashing). Putting more responsibility onto the T2 again allows for Apple’s engineering teams to do more early failure analysis on hardware and software, monitor stability of these machines, experiment further with large-scale production and deployment of this ARM platform, as well as continue to enhance the silicon for Stage 3. A few new user-visible features were added as well in this stage, such as support for the passive “Hey Siri” trigger, and offloading image and video transcoding to the T2 chip, which frees up the main Intel processor for other applications. BridgeOS was bumped to 2.0 to support all of these changes and the new chip. On the macOS software side, what was internally known as Project Marzipan was first demonstrated to the public. Though it was originally discovered around 2017, and most likely began development and testing within later parts of Stage 1, its effects could be seen in 2018 with the release of iPhone apps, now running on the Mac using the iOS SDKs: Voice Recorder, Apple News, Home, Stocks, and more, with an official announcement and public release at WWDC in 2019. Catalyst would come to be the name of Marzipan used publicly. This SDK release allows app developers to easily port iOS apps to run on macOS, with minimal or no code changes, and without needing to develop separate versions for each. 
The end goal is to allow developers to submit a single version of an app and have it work seamlessly on all Apple platforms, from Watch to Mac. At present, iOS and iPadOS apps are compiled for the full gamut of ARM instruction sets used on those devices, while macOS apps are compiled for x86_64. The logical next step is to cross this bridge and unify the instruction sets. The new products using the T2 have not been quite as well received as those with the T1. Many users have noticed how this change contributes further towards machines with limited to no repair options outside of Apple’s repair organization, as well as some general issues with bugs in the T2. Products with the T2 also no longer have the “Lifeboat” connector, which was previously present on the 2016 and 2017 model Touch Bar MacBook Pros. This connector allowed a certified technician to plug in a device called a CDM Tool (Customer Data Migration Tool) to recover data off of a machine that was not functional. The removal of this connector limits the options for data recovery in the event of a problem, and Apple has never offered any data recovery service, meaning that an irreparable failure of the T2 chip or the primary board results in complete data loss, in part due to the strong encryption provided by the T2 chip (even if the data could be read off, the encryption keys would be lost with the T2 chip). The T2 also brought about the pairing of serial numbers of certain internal components, such as the solid state storage, display, and trackpad, among others. In fact, many other controllers on the logic board are now also paired to the T2, such as the WiFi and Bluetooth controller, the PMIC (Power Management Controller), and several other components. This is the exact same system used on newer iPhone models and is quite familiar to technicians who repair iPhone logic boards.
While these changes are fantastic for device security and corporate and enterprise users, allowing for a very high degree of assurance that devices will refuse to boot if tampered with in any way - even from storied supply chain attacks, or other malfeasance that can be done with physical access to a machine - it has created difficulty with consumers who more often lack the expertise or awareness to keep critical data backed up, as well as the funds to perform the necessary repairs from authorized repair providers. Other issues reported that are suspected to be related to T2 are audio “cracking” or distortion on the internal speakers, and the BridgeOS becoming corrupt following a firmware update resulting in a machine that can’t boot. I believe these hiccups will be properly addressed once macOS is fully integrated with the ARM platform. This stage of the Mac is more like a chimera of an iPhone and an Intel based computer. Technically, it does have all of the parts of an iPhone present within it, cellular radio aside, and I suspect this fusion is why these issues exist. Recently, security researchers discovered an underlying security problem present within the Boot ROM code of the T1 and T2 chip. Due to being the same fundamental platform as earlier Apple Watch and iPhone processors, they are vulnerable to the “checkm8” exploit (CVE-2019-8900). Because of how these chips operate in a Mac, firmware modifications caused by use of the exploit will persist through OS reinstallation and machine restarts. Both the T1 and T2 chips are always on and running, though potentially in a heavily reduced power usage state, meaning the only way to clean an exploited machine is to reflash the chip, triggering a restart, or to fully exhaust or physically disconnect the battery to flush its memory. 
Fortunately, this exploit cannot be performed remotely and requires physical access to the Mac for an extended duration, as well as a second Mac to perform the change, so the majority of users are relatively safe. As well, with a very limited execution environment and access to the primary system only through a “mailbox” protocol, the utility of exploiting these chips is extremely limited. At present, there is no known malware that has used this exploit. The proper fix will come with the next hardware revision, and is considered a low priority due to the lack of practical value in running malicious code on the coprocessor. At the time of writing, all current Apple computers have a T2 chip present, with the exception of the 2019 iMac lineup. This will change very soon with the expected release of the 2020 iMac lineup at WWDC, which will incorporate a T2 coprocessor as well. Note: from here on, this turns entirely into speculation based on info gathered from a variety of disparate sources. Right now, we are in the final steps of Stage 2. There are strong signs that a MacBook (12”) with an ARM main processor will be announced this year at WWDC (“One more thing...”), at a Fall 2020 event, a Q1 2021 event, or WWDC 2021. Based on the lack of a more concrete answer, WWDC 2020 will likely not see it, but I am open to being wrong here.
Stage 3 (Present/2021 - 2022/2023):
Stage 3 involves the introduction of at least one fully ARM-powered Mac into Apple’s computer lineup. I expect this will come in the form of the previously-retired 12” MacBook. There are rumors that Apple is still working internally to perfect the infamous Butterfly keyboard, and there are also signs that Apple is developing an A14X-based processor with 8-12 cores designed specifically for use as the primary processor in a Mac. It makes sense that this model could see the return of the Butterfly keyboard, considering how thin and light it is intended to be, and using an A14X processor would make it a very capable, very portable machine that should give customers a good taste of what is to come. Personally, I am excited to test the new 12" “ARMbook”. I do miss my own original 12", even with all the CPU failure issues those older models had. It was a lovely form factor for me. It's still not entirely known whether the physical design of these will change from the retired version, exactly how many cores it will have, the port configuration, etc. I have also heard rumors about the 12” model possibly supporting 5G cellular connectivity natively thanks to the A14 series processor. All of this will most likely be confirmed soon enough. This 12” model will be the perfect stepping stone for Stage 3, since Apple’s ARM processors are not yet a full-on replacement for Intel’s entire processor lineup, especially at the high end, in products such as the upcoming 2020 iMac, iMac Pro, 16” MacBook Pro, and the 2019 Mac Pro. Performance of Apple’s ARM platform compared to Intel has been a big point of contention over the last couple of years, primarily due to the lack of data representative of real-world desktop usage scenarios. The iPad Pro and other models with Apple’s highest-end silicon still lack the ability to run a lot of high-end professional applications, so data about anything beyond video editing and photo editing benchmarks quickly becomes meaningless.
While there are purely synthetic benchmarks like Geekbench, AnTuTu, and others that try to bridge the gap, they are far from accurate or representative of real-world performance in many instances. Even though the Apple ARM processors are incredibly powerful, and I do give constant praise to their silicon design teams, there still just isn’t enough data to show how they will perform in real-world desktop usage scenarios, and synthetic benchmarks are like standardized testing: they only show how good a platform is at running the synthetic benchmark. This type of benchmark stresses only very specific parts of each chip at a time, rather than measuring how well it does a general task, and then boils down the complexity and nuances of each chip into a single numeric score, which is not a remotely accurate way of representing processors with vastly different capabilities and designs. It would be like gauging how well a person performs a manual labor task by averaging only the speed of every individual muscle in the body, regardless of if, or how much, each is used. A specific group of muscles being stronger or weaker than others could wildly skew the final result, and grossly misrepresent the performance of the person as a whole. Real-world program performance will be the key in determining the success and future of this transition, and it will have to be great on this 12" model - not just in a limited set of tasks, it will have to be great at *everything*. It is intended to be the first Horseman of the Apocalypse for the Intel Mac, and it had better behave like one. Consumers have been expecting this, especially after 15 years of Intel processors, the continued advancement of Apple’s processors, and the decline of Intel’s market lead. The point of this “demonstration” model is to ease both users and developers into the desktop ARM ecosystem slowly.
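The muscle analogy can be made concrete with a toy calculation (all numbers made up for illustration): two hypothetical chips earn the identical averaged "benchmark" score, yet one is far slower on a real task dominated by its weakest unit.

```python
# Toy example with made-up numbers: averaged subscores hide bottlenecks.
balanced = {"int": 100, "float": 100, "memory": 100}   # well-rounded chip
lopsided = {"int": 190, "float": 100, "memory": 10}    # great ALU, weak memory

def average(scores):
    return sum(scores.values()) / len(scores)

# A workload that spends 80% of its time bound by memory performance:
def task_time(scores, weights={"int": 0.1, "float": 0.1, "memory": 0.8}):
    # time for each component is (share of the work) / (speed in that area)
    return sum(w / scores[k] for k, w in weights.items())
```

Both chips average out to the same single score of 100, but on the memory-bound task the lopsided chip takes roughly eight times longer - exactly the kind of gap a one-number benchmark conceals.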
Much like how the iPhone X paved the way for FaceID-enabled iPhones, this 12" model will pave the way towards ARM Mac systems. Some power-user type consumers may complain at first, depending on the software compatibility story, then realize it works just fine, since the majority of computer users today do not do many tasks that can’t be accomplished on an iPad or lower-end computer. Apple needs to gain the public’s trust for basic tasks first, before they will be able to break into the market of users performing more hardcore or “Pro” tasks. This early model will probably not be targeted at these high-end professionals, which will allow Apple to begin gathering early information about the stability and performance of this model, day-to-day usability, developmental issues that need to be addressed, hardware failure analysis, etc. All of this information is crucial to Stage 4, or possibly later parts of Stage 3. The 2 biggest concerns most people have with the architecture change are app support and Bootcamp. Any apps released through the Mac App Store will not be a problem. Because App Store apps are submitted as LLVM IR (“Bitcode”), the system can automatically download versions compiled and optimized for ARM platforms, similar to how App Thinning on iOS works. For apps distributed outside the App Store, things might be trickier. There are a few ways this could go:
Developers will need to build both x86_64 and ARM versions of their apps - app bundles have supported multiple-architecture binaries since the dawn of OS X and the PowerPC transition
Move to apps being distributed in an architecture-independent manner, as they are on the App Store. There are some software changes suggestive of this, such as the new architecture in dyld3.
An x86_64 instruction decoder in silicon - very unlikely due to the significant overhead this would create in the silicon design, and potential licensing issues. (ARM, being a RISC, “reduced instruction set”, has very few instructions; x86_64 has thousands)
Server-side ahead-of-time transpilation (converting x86 code to equivalent ARM code) using Notarization submissions - Apple certainly has the compiler chops in the LLVM team to do something like this
Outright emulation, similar to the approach taken in ARM releases of Windows, which was received extremely poorly (limited to 32-bit apps, and very, very slow). There could be other solutions in the works to fix this, but I am not aware of any. This is just me speculating about some of the possibilities.
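On the first option: a multiple-architecture ("fat", or universal) Mach-O binary packs one code slice per architecture behind a small header, and the loader picks the matching slice at launch. As a rough sketch of the on-disk format (the layout is public in Apple's mach-o/fat.h; the bytes used below are synthetic test data, not a real binary), the header is a big-endian magic number followed by a table of per-architecture entries:

```python
import struct

FAT_MAGIC = 0xCAFEBABE                      # magic for a 32-bit fat header
CPU_NAMES = {0x01000007: "x86_64",          # CPU_TYPE_X86 | CPU_ARCH_ABI64
             0x0100000C: "arm64"}           # CPU_TYPE_ARM | CPU_ARCH_ABI64

def fat_archs(data):
    """List the architecture names found in a Mach-O fat (universal) header."""
    magic, count = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        return []                           # not a universal binary
    # each fat_arch entry is five big-endian uint32s, starting at offset 8:
    # cputype, cpusubtype, file offset, size, alignment
    return [CPU_NAMES.get(struct.unpack_from(">I", data, 8 + i * 20)[0],
                          "unknown")
            for i in range(count)]
```

Apple's `lipo` tool builds and inspects these fat binaries, so a developer shipping for both architectures would combine the two slices into a single executable inside the existing app bundle.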
As for Bootcamp, while ARM-compatible versions of Windows do exist and are in development, they come with their own similar set of app support problems. Microsoft has experimented with emulating x86_64 on their ARM-based Surface products, and some other OEMs have created their own Windows-powered ARM laptops, but with very little success. Performance is a problem across the board, with other ARM silicon not being anywhere near as advanced, and the majority of apps in the Windows ecosystem that were not developed in-house at Microsoft run terribly due to the x86_64 emulation software. If Bootcamp does come to the early ARM MacBook, it will more than likely run very poorly for anything other than Windows UWP apps. There is a high chance it will be abandoned entirely until Windows becomes much more friendly to the architecture. I believe this will also be a very crucial turning point for the MacBook lineup as a whole. At present, the iPad Pro paired with the Magic Keyboard is, in many ways, nearly identical to a laptop, with the biggest difference being the system software itself. While Apple executives have outright denied plans of merging the iPad and MacBook lines, that could very well just be a marketing stance, shutting down the rumors in anticipation of a well-executed surprise. I think that Apple might at least re-examine the possibility of merging Macs and iPads in some capacity, but whether they proceed or not could be driven by consumer reaction to both products. Do they prefer the feel and usability of macOS on ARM, and like the separation of both products? Is there success across the industry of the ARM platform, both at the lower and higher ends of the market? Do users see that iPadOS and macOS are just 2 halves of the same coin? Should there be a middle ground, and a new type of product similar to the Surface Book, but running macOS? Should Macs and iPads run a completely uniform OS?
Will iPadOS ever expose the same sort of UNIX-based tools for IT administrators and software developers that macOS has? These are all very real questions that will pop up in the near future. The line between Stage 3 and Stage 4 will be blurry, and will depend on how Apple wishes to address different problems going forward, and what the reactions look like. It is very possible that only the 12” will be released at first, or a handful more lower-end laptop and desktop products could be released, with high-performance Macs following in Stage 4, or perhaps everything but enterprise products like the Mac Pro will be switched fully. Only time will tell.
Stage 4 (the end goal):
Congratulations, you’re made it to the end of my TED talk. We are now well into the 2020s and COVID-19 Part 4 is casually catching up to the 5G = Virus crowd. All Macs have transitioned fully to ARM. iMac, MacBooks Pro and otherwise, Mac Pro, Mac Mini, everything. The future is fully Apple from top to bottom, and vertical integration leading to market dominance continues. Many other OEM have begun to follow in this path to some extent, creating more demand for a similar class of silicon from other firms. The remainder here is pure speculation with a dash of wishful thinking. There are still a lot of things that are entirely unclear. The only concrete thing is that Stage 4 will happen when everything is running Apple’s in- house processors. By this point, consumers will be quite familiar with the ARM Macs existing, and developers have had have enough time to transition apps fully over to the newly unified system. Any performance, battery life, or app support concerns will not be an issue at this point. There are no more details here, it’s the end of the road, but we are left with a number of questions. It is unclear if Apple will stick to AMD's GPUs or whether they will instead opt to use their in-house graphics solutions that have been used since the A11 series of processors. How Thunderbolt support on these models of Mac will be achieved is unknown. While Intel has made it openly available for use, and there are plans to have USB and Thunderbolt combined in a single standard, it’s still unclear how it will play along with Apple processors. Presently, iPhones do support connecting devices via PCI Express to the processor, but it has only been used for iPhone and iPad storage. The current Apple processors simply lack the number of lanes required for even the lowest end MacBook Pro. This is an issue that would need to be addressed in order to ship a full desktop-grade platform. 
There is also the question of upgradability for desktop models, and if and how there will be a replaceable, socketed version of these processors. Will standard desktop and laptop memory modules play nicely with these ARM processors? Will they drop standard memory across the board, in favor of soldered options, or continue to support user-configurable memory on some models? Will my 2023 Mac Pro play nicely with a standard PCI Express device that I buy off the shelf? Will we see a return of “Mac Edition” PCI devices? There are still a lot of unknowns, and guessing any further in advance is too difficult. The only thing that is certain, however, is that Apple processors coming to Mac is very much within arm’s reach.
(Under Construction, last updated: 06/06/20)

Q: What is Nucleus Co-Op? A: https://www.youtube.com/watch?v=jbituCgu3Bc Nucleus Co-Op is a free and open source tool for Windows that allows split-screen play in many games that do not natively support it. The app was originally created by Lucas Assis; Zerofox later took over and added a ton of new features and improvements to support a lot more games. Ilyaki later joined in and brought multiple keyboard/mouse support and more great features to the table. The app is currently being developed and updated by these devs: Lucas Assis, Zerofox and Ilyaki. Thanks also to R-mach for making and supporting the website that hosts the Nucleus Co-Op scripts. The further development of the app wouldn't have been possible without all the amazing contributions and hard work from the SplitScreen Dreams Discord members (which include the devs mentioned above) who made all the new Nucleus Co-Op scripts and continue to make new discoveries and scripts to support even more games, among them: Talos91, PoundlandBacon, dr. old.boi, Pizzo and many more.

Q: How does Nucleus Co-Op work? A: Essentially, Nucleus Co-Op opens multiple instances of the same game (some games require mutex killing or other methods for this), makes each instance answer to only one specific gamepad (done via Nucleus Co-Op's custom XInput DLLs or XInput Plus DLLs), and connects those instances via LAN or Steamworks online multiplayer emulation (Goldberg Emulator), all while making sure all windows have focus so they are playable with gamepads even in the background. Nucleus then resizes, removes the borders of, and repositions the game windows so you get a synthetic splitscreen to play locally with your friends.

Q: Which games can be splitscreened using Nucleus Co-Op? A: There are a lot of supported games, all mentioned in the list above. A ton of games are now supported thanks to the amazing program called Goldberg Emulator, developed by Mr.
Goldberg, a big thank you to him. Read the Goldberg FAQ linked too if you want to know more.

Q: Where do I download Nucleus Co-Op? A: You can download the latest version from GitHub. Download the compiled .rar release; don't download the source code zip if you just want to use the app.

Q: How do I use Nucleus Co-Op? A: Here is a quick video tutorial: https://www.youtube.com/watch?v=hWmvz59i-o0
1.- Download and extract Nucleus Co-Op (extract using apps like 7-Zip or WinRAR).
2.- Open NucleusCoop.exe.
3.- Click on Download Game Scripts, search for a game in the supported games list and download a script. You can also see all available scripts from the app by pressing the View All option.
4.- Once the script has finished downloading you will get a prompt asking if you would like to add a game now. Press yes if you want to add it now; if you select no, proceed to step 6.
5.- Next you need to find where your game's executable is located. If you're not sure, try Googling 'where is (game) installed' and just search for the .exe in the place they tell you to look. For Steam games this is usually something along the lines of 'C:\Program Files\Steam\steamapps\common\(game)'. Some games will have their real .exe stashed away in a folder called 'bin' or 'binaries' inside that place. Once you choose the right .exe, add the game.
6.- You can also automatically add games: click 'Auto-Search' and select the drive and path you want to add games from.
7.- Once your game is added, select it in the Nucleus UI and drag the gamepad icons to the splitscreen layout; click the icon in the top-left corner of the layout to change the type of splitscreen layout. You can also right click a player in the layout to change its size.
8.- Finally, press play and you are ready to go.

Q: Where should I place the Nucleus Co-Op folder? A: Nucleus Co-Op can be placed almost anywhere (Documents, Downloads, Desktop, etc...) except inside the game files.
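Under the hood, the splitscreen layout step is simple window geometry. This sketch is illustrative only - the real app drives Windows APIs to actually resize and move the game windows - but it shows how per-player rectangles for the common layouts can be computed:

```python
# Illustrative sketch of splitscreen layout math (not Nucleus Co-Op's code):
# given a screen size and player count, compute each game window's rectangle
# as (x, y, width, height).
def split_layout(screen_w, screen_h, players, mode="vertical"):
    if mode == "vertical":                  # side-by-side columns
        w = screen_w // players
        return [(i * w, 0, w, screen_h) for i in range(players)]
    if mode == "horizontal":                # stacked rows
        h = screen_h // players
        return [(0, i * h, screen_w, h) for i in range(players)]
    w, h = screen_w // 2, screen_h // 2     # otherwise fall back to a 2x2 grid
    return [(x * w, y * h, w, h) for y in range(2) for x in range(2)]
```

For example, 2 players on a 1920x1080 screen in vertical mode each get a borderless 960x1080 window, one at x=0 and one at x=960.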
Q: How do I play with an uneven number of players (such as 3 players) without having an empty space? A: Right click on a section of the splitscreen layout. Q: Nucleus Co-Op doesn't launch, how do I fix it? A: Here are a few things you can try:
1.- Try updating your Microsoft .NET Framework, and install/reinstall Visual C++ 2010-2017.
2.- Run Nucleus Co-Op as admin.
3.- Make sure your antivirus program is not blocking Nucleus Co-Op.
4.- Restart your PC and try again.
Q: I wish to help out with the project, how can I get in touch? A: Join the Nucleus Co-Op discord community or contact us here in the subreddit. Q: When support for X game? A: Not all games are easy to splitscreen. If you want to suggest a game, make a post with the title [Request] Name of the game and provide useful information, like whether the game supports LAN or dedicated servers, whether it is available on Steam or other services, whether it uses external servers for online play, etc. You can also contact any of our experienced Nucleus scripters here or in the Nucleus Co-Op discord and ask if a script is possible. The main scripter is the OP of this post, for instance. Remember that scripters are limited by the games they own and can test on, so if you really want support for a game to be added, consider donating the game to the scripter in question. Q: How do I know when a script gets updated? A: Script updates are always announced in the Nucleus Co-Op discord server, in the script-updates channel. Q: How do I create my own splitscreen script for Nucleus Co-Op? A: Here is the documentation; open the .js file with Notepad to read it. You can also use the other scripts you download from Nucleus as reference - they get downloaded to the Nucleus scripts folder. If you create a working script, or if you have any questions about Nucleus scripting, ask us in the Nucleus Co-Op discord or here in the subreddit; we can help you improve your script so it is fully working and ready to share with the community.
Q: Does Nucleus Co-Op work on Linux/Mac? A: Nucleus Co-Op depends on a lot of Windows functions and APIs; at the moment it only works on Windows 7 and up. If you are interested in porting Nucleus Co-Op to other operating systems, please feel free to contact any of the developers. Q: Where can I report a bug/issue? A: Note that Nucleus Co-Op is a tool in development and still in alpha. Expect bugs, glitches and weird things to happen. Help other people avoid these things by checking for a solution here and submitting a [BUG REPORT] to the subreddit as a new topic, or in the comments here, if no one else has brought it up. A good [BUG REPORT] looks like this: Thread name: [BUG REPORT] Simon falling off horse BUG: Simon falls off his horse. EXPECTED: Simon should not fall off his horse, right? CAUSE: I'm pretty sure it's because I have my computer plugged into an auto-blow. STEPS TO REPRODUCE: 1.- Open up Simon Stays On His Horse: The Interactive Video Game of the Movie. 2.- Choose Co-Op and join with another player. 3.- Simon falls off his horse!!! TYPE: Severe! The gameplay can't continue if Simon isn't on his horse! (Alternatively, Minor if the gameplay can continue but it's just annoying.) NUCLEUS OPTIONS: I played with 2 players using the vertical splitscreen (left and right) on one TV and 2 Famicom controllers. I'm using the latest version. SYSTEM: I'm on Windows 3.1 with 4MB of RAM, a 2KHz CPU and no graphics card, playing on a projector. She's a monster. I'd really like this to get fixed please thanks magic man! -Beanboy Keep in mind most scripts are made and tested using the latest legit Steam versions of the game, so provide information about what version of the game you have. Also provide a debug log of the Nucleus Co-Op error; you can enable the debug log in the Nucleus UI settings. You can also ask for support in our discord. Q: Why is Nucleus Co-Op resizing the game instances incorrectly/the instances look stretched?
A: Try setting your monitor scale to 100% in your monitor/TV resolution settings. It is also highly recommended that you add custom resolutions to all your monitors from your AMD/Nvidia/Intel panel (for example, if you are using a monitor resolution of 1920x1080, add custom resolutions like 960x540, 1920x540, 960x1080, etc.); that way most games will be able to see and use those custom resolutions and the splitscreen will not look stretched (Example). Note that not all games support custom or widescreen resolutions. Also try disabling the Nucleus status bar in the Nucleus UI settings. Q: Why is Nucleus Co-Op throwing an error message that it cannot find a file when launching a script? A: A lot of scripts edit the game's .ini or .cfg files to force windowed mode and to adjust the game resolution, so make sure you run your game at least once and change some graphics settings before running it via Nucleus Co-Op; that way you make sure the config files get generated first. If you are still getting the error after doing that, select the game in the UI, click on Game Options and select Delete UserProfile Config Path for all players. Also try disabling the Nucleus status bar in the Nucleus UI settings. Q: Where are my Nucleus Co-Op save files located? A: Some scripts save to the Nucleus Co-Op environment folder located in C:\Users\YourUser\NucleusCoop; you can also access each game's save files via the Nucleus Co-Op UI: select a game, click on Game Options and select Open UserProfile Save/Config Path. Other scripts just save to the same file path your regular game saves to. Q: Why are my in-game frames per second low/better in one instance than in the others when using Nucleus Co-Op?
A: Remember that Nucleus Co-Op opens multiple instances of a game, so depending on the game this can be quite demanding for your PC. To improve FPS and performance, try reducing graphics settings like textures and shadows, limit the FPS, or unfocus all the game windows so that they get equal priority and the FPS evens out. You can do this by Alt-Tabbing to a different window, like the Nucleus app window - the game windows will still remain on top - or by pressing Windows key + B on your keyboard to unfocus all instances. Q: My PlayStation/generic PC controller isn't working/isn't being detected by Nucleus Co-Op, how do I fix it? A: Most Nucleus Co-Op scripts only detect XInput gamepads. The controllers that work best are Xbox 360, Xbox One and Logitech game controllers, for minimum hassle. There are a few scripts that also support DirectInput gamepads, but XInput gamepads are generally easier to restrict to a specific game instance than DInput gamepads. If you are using PS4 gamepads, try the app DS4Windows: look in the settings for an option called "hide ds4 controller" and make sure it's ticked. To ensure it's definitely running in exclusive mode, set DS4Windows to load on Windows startup, then turn your controllers on while Windows is loading. Download the latest version here - https://ryochan7.github.io/ds4windows-site/ If you are using generic DInput gamepads, the app XOutput is also useful for emulating XInput gamepads. The app x360ce version 4, which creates virtual Xbox 360 controllers inside your Windows operating system, is also very useful for emulating XInput gamepads system-wide. Remember that some games detect both DInput and XInput gamepads, so even if you are emulating an XInput gamepad the input could still not be restricted correctly, because the game is now responding both to the emulated XInput gamepad and to the native DirectInput of your gamepad; that is why some apps like DS4Windows have an "exclusive mode".
Also do not place x360ce XInput DLLs in the Nucleus Co-Op files, as this might interfere with Nucleus's custom XInput DLLs. If you are using Steam controllers, try this: https://www.youtube.com/watch?v=wy4F2eqTXQ4 Q: Why is my keyboard not showing in the Nucleus Co-Op UI? A: If a script only shows gamepad icons and no keyboard icons, that means the script only supports gamepads and doesn't support keyboards and mice in splitscreen yet. Q: There are many keyboard and mouse icons in the UI, how do I know which ones to use? A: If you press a key on the keyboard you will use, or move the mouse, their corresponding icons in the Nucleus Co-Op UI will light up yellow. The app can detect keyboard macros; that is why you will sometimes get multiple keyboard icons. Q: Can you play splitscreen+LAN on different PCs? A: Yes. If you run the game via Nucleus Co-Op on different PCs, you can connect all the instances you launch via LAN; for example, you can have 2 players playing vertical splitscreen on one PC via Nucleus and connect to 2 others playing Nucleus splitscreen on a different PC via LAN. If the script uses Steamworks multiplayer emulation, you'll have to change the instances' steam IDs on the other PCs you'll connect to; otherwise the instances launched by Nucleus will use the same steam IDs and won't be able to connect to each other. For that, you can open the game script's .js file in the Nucleus scripts folder on the other PCs and add, for example, Game.PlayerSteamIDs = [ "76561198134585131","76561198131394153","76561198011792067","76561198043762785" ]; that will change the default IDs of the first four instances you open on one PC via Nucleus Co-Op. Q: Does Nucleus Co-Op have any malware? A: Absolutely not. Q: This project is amazing, where can I donate? A: We don't have a unified donation platform yet, but you can support the devs individually here: Zerofox, Ilyaki, Lucas Assis. You can also donate to our main scripters who make the game scripts for Nucleus: Talos91/blackman9
Efficiency costs of purchase vs. awakenings, ideal ranks, and their use in winning tournaments
https://preview.redd.it/govc8j6lwaw41.jpg?width=720&format=pjpg&auto=webp&s=b53ae8c35f697ea53d6d292ec05f434f29578784 Blue = gem cost of the initial purchase of a hero at that starting rank, vs. red = gem cost of the awakenings needed to get them to R6 (800 * each rank) - together being 100% of their total cost. The "box"-looking effect is the proportion that each rank takes up relative to its cost - e.g., the blue boxes are always larger b/c they offer less efficiency, at 1500/rank instead of the 800/rank for each awakening; and they are different sizes b/c the heroes have different total costs (i.e., the awakenings take up a smaller or larger relative proportion of it). The black line is then the % of tokens that can be skipped when starting off with a hero at that starting rank, and the green dashed lines represent each successive rank above that, which are always the same regardless of a hero's starting rank: so an R0 hero starts off with none, but then at R1 is 3% of the way through, then at R2, R3, R4, R5, and R6 is 9%, 18%, 29%, 53%, and 100% of the way through. Speaking of, I did not make another one for R7, though I could if there is interest - still, this should help get across the main points. And yes, I realize that there are no heroes that start at R3 or R6 (yet!), but it was easier to leave those in than to take them out. Also an accompanying table of other helpful numbers.
[Table: total gem cost and remaining tokens to R6 for each starting rank, including R3 (though no hero starts here) and R6 (doesn't exist - yet!?).]
People keep asking questions about the "efficiency" of ranking up heroes for tournament usage, so I thought I would share this graph, in case it helps. For instance, did you realize that once you buy a hero that comes pre-awakened to R2, you've already spent nearly *half* of their total gem cost to fully awaken them to R6 (4 more awakenings * 800 each = 3200, vs. their 3000 price-tag), although you start off only skipping 9% of the total tokens needed to get them there ((5+10)/(5+10+15+20+40+80))? Especially for newer players considering which heroes to buy and rank up to unlock worlds, these pre-awakened heroes offer fantastic utility for the campaign, Endless mode, and higher-difficulty RS situations. And then, once bought, they offer the same efficiency as any other hero to finish off to their R6 for use in tournaments. It's a matter of preference whether to get one or several of them early and enjoy their use in the campaign, or to avoid their high cost and just awaken more inexpensive ones for faster, though more difficult, progress. In contrast, the heroes that come pre-awakened to R5 are more expensive - but their purchase price represents 90% of their total gem cost, and they already come with more than half the total tokens that would be needed to unlock their R6. In short, if you have the gems, it's way more efficient to purchase Yan or Narlax and then finish them off to their R6 than it is to start a new hero at R1 (although if you would have to save up the gems first, read this post instead: https://www.reddit.com/RealmDefenseTD/comments/g1mmg5/advice_about_awakening_existing_heroes_vs_buying/). (Also, do not buy Leif, at least not for the sake of tournaments, although he's great for campaign, and the #1 hero for RS, so especially good for getting a new event hero to a higher rank.)
As far as "ideal" ranks go (those below R6 that are worth pausing at, to win during a hero's week), that is something many newer players want to know about, without quite realizing that it's not necessarily for them just yet. But for those who are keen to know, read https://realm-defense-hero-legends-td.fandom.com/wiki/Meta#Season_11_Meta.2FAnalysis for the utility of heroes in tournament settings, and https://realm-defense-hero-legends-td.fandom.com/wiki/Awakening_Tokens#Most_powerful_Ranks for the utility of each awakening, plus https://realm-defense-hero-legends-td.fandom.com/wiki/Heroes_overview for some additional commentary on hero ideal ranks. For instance, Smoulder has 2 of them, for different purposes: R4 for his situational anti-flier stun effect, and R5 for his own week, which adds a stun & a reduced cooldown - although really, for his own week he's mostly R6-or-bust, but that depends heavily on the league & the lateness of the season. I should perhaps add that I've never had much luck with ideal ranks. They seem mostly to be useful in Diamond League, and then by Masters they are already no longer useful. That said, you should give each one careful thought, individually for each hero, b/c it can save you a TON of time versus getting a hero's R6 when you didn't need it (yet). Perhaps the most (in-)famous example is Yan's R6, which she doesn't need on her own week (I've literally seen Gold- rather than Purple-outlined Yans among the *very* top scores of a league), probably b/c she doesn't have great skills to help turn her blessed stat boost into actual DAMAGE (being mainly a "support" hero, which she is good at) - and especially if you don't even have Efrigid or Bolton yet to receive the synergy that her R6 talent would offer, it is fairly useless. Though these things do tend to change over time - Narlax's R6 also used to be unnecessary, until this past season (11), when on his own blessed week it became mandatory to pull several strong bosses.
Also, one of Hogan's ideal ranks used to be R3 iirc (when his R5 decreased rather than increased his attack speed, before it was switched), then last season it became R5 to keep him alive (also against a strong boss), and now this season it looks like his blessed week is strongly pushing even his R6? Oh yes, some heroes can't/shouldn't really be paused at all, like Lancelot, who prior to R6 is flat-out replaceable even when blessed, but at R6 gains a STRONG anti-air utility that is absolutely mandatory to win that week (as in, if ANYONE else in your group has it, who isn't terribly unskilled, then you have little chance of getting a higher score than them). Where the concept of "ideal ranks" is most helpful, then, is when you already have (most of) the Meta, and are looking to win more reliably each week. Having a hero at an ideal rank may not be required to win in Gold League, for instance, but it can be helpful to use it along the way while you work on other things too (like a second hero's ideal rank, or their R6, or even continuing on with the same hero, just holding back on the actual gem cost - btw, a strong shout-out thanks to lanclos for sharing with me most of what I know about ideal ranks:-). I suppose it may be like identifying potential resting spots while climbing a mountain - once you identify them, you can either pause and rest at them, or of course skip them and keep going, but either way it may be nice to at least plan to pass by them during your ascent, just in case you find that you need them. SPEAKING OF, here are some additional thoughts on tournaments that might help in that regard, though first I'll have to cover some basics: a) there is an effect I call the "leading edge", whereby the earlier weeks in the season are the hardest. E.g., *this week* in Gold League is literally the hardest week that it will ever be this season, b/c *this* is the week that it contains the most senior players (like former GMs).
Then, next week, Platinum League will be created, and will be populated by the top 3 players from each group that managed to get promoted - which, let's face it, tends to be the most senior players, with the deepest hero investments and also the most experience & skill; and thus *that week* will be the hardest that Platinum will ever see, and so on in Diamond, and Masters, and...actually, Legendary is special, b/c once a player reaches GM, they remain there. But the other leagues get easier the further the season goes, b/c of all the more senior players getting promoted each week. So therefore the last week of each season (prior to Legendary) is literally the easiest to get promoted in. There are some important modifiers to this: it may be easy or hard in general but not for you, b/c of the heroes you have; there is also an effect where campers used to try not to get promoted too quickly, but then towards the end of the season get nervous and want to move upwards - but anyway, this is generally true. So when I say "in lower leagues, later in the season", what I mean is "further away from the leading edge". IN OTHER WORDS, the difficulty of Gold League on week #1 is nowhere NEAR the same as the difficulty of Gold League on week #15. On the other hand, Platinum League on week #2 is actually quite similar to the difficulty of Legendary League at any time, b/c that is the league where, at that time, all the veterans are (with anything above Platinum not yet having been created). See what I mean? But b/c of this effect, any talk about "Gold League" or "Platinum League" must be merely an average of how difficult it is to win, which basically means mid-way away from the leading edge - although be aware of these variations, where earlier means *much* harder than average, and later means much easier.
b) Gold League, further, is special in being so small, and in having players that haven't finished the campaign yet, which (vastly) increases the number of total players and has the effect of "diluting" / spreading the veteran players out among the various groups. Therefore, even on week #1, its difficulty is nowhere near as hard as Legendary League, b/c it is mitigated by this effect. Platinum on week #2 also isn't *quite* as hard as Legendary, for similar reasons (the group size being 30 instead of 50, and effects like even former GMs lacking Hogan and not being promoted while others who have Hogan's R6 can do even better), but...Gold is truly special in being the easiest league to win in (aside from the non-repeatable Bronze and Silver, of course). Though again, for people having trouble getting promoted from Gold League, take heart: as the season progresses it WILL get easier!:-) c) in Gold League, with Koi & Raida you can pretty much win by accident, even w/o meteors (though this particular week requires Narlax too, and might even need meteors - though I have never used any to get out of Gold myself). This is b/c those heroes provide so much higher utility, compared to so MANY players that lack them, that you definitely have a good chance. And that chance keeps repeating every week, as it gets easier and easier later and later into the season, so if you don't get promoted one week, keep trying the next. The advice for players lacking Koi & Raida is the same: keep trying, and eventually you'll get into a group that lacks Koi, or perhaps has someone who doesn't know how to use him yet, and you CAN win!
And if you truly want to prioritize this aspect of the game before you finish the campaign, get a hero to an ideal rank or even R6, and on their blessed week, if it's late enough, you'll have a VERY good shot (though perhaps also needing good generic heroes like Narlax and Leif, unless you get VERY lucky with your group placement, or outright R6 a few heroes for this purpose). d) in Platinum, it gets a bit harder. Though, if you have the Meta, not by that much. For those who have Raida & Koi: also pick up Yan, necro-Connie, Narlax, and Smoulder's R4, and you'll do fine in Platinum, even without the blessed hero (though of course, earlier in the season you may need them, while later you can get by without them, having strong generic+situational replacements). e) in Diamond, it gets harder still, and you start to need the blessed hero more often. Though not every week, and not necessarily at an "ideal"/pausing rank. Two seasons ago (while I was still R6ing Koi) I got promoted by having Obsidian, not at his ideal rank of R4 but at just about level 20 and rank R2 - & even then he was replaceable with Efri's R6 (which I did not have) - though that was week #13 out of 15, so very late. Many other similar stories told by veteran players abound: Mabyn's R2, Helios's R4, and if you have Yan's R6, then also Efri's R4 & Bolton's R3, etc. f) in Masters, it is pretty much R6-or-bust, and so you are already past the stage where ideal ranks can help you, for the most part (I mean, Yan's R5 would probably still work, and Helios's R4 b/c towers don't add much in tournament situations, but...not much else). *If* you use the blessed hero at all, you probably need them all the way to R6. Though there are a few situations where a hero is outright replaceable - chiefly Sethos, Leif, and Masamune (possibly needing it to be quite late in the season for that one), all of whom lack anti-air capabilities (though Masamune's R7 is going to change that!).
g) that said, Masters League is still nowhere near as difficult as Legendary. Scores that would get you promoted out of Masters won't even get you a reward in Legendary (although THIS season looks to be changing that - thus encouraging promotion and concurrently discouraging camping in lower leagues - definitely a plus for both veteran and more junior players alike!). Also, for the most part you can get by without the entire cast of "situational" heroes that are needed in Legendary to win a GM. What I mean is: when veteran players have ALL the heroes to choose from, and they are all at R6, they can find the absolute BEST one for any given week - which could be Efri, Mabyn, Azura, Caldera, Connie, Helios, Shamiko, Narlax, Smoulder, etc. - and if you want to get a GM, you need to have whatever it is that week that is among the BEST. While in Masters, you most often don't - so actually, R6-or-bust isn't that hard to do, at least compared to Legendary, where you need both the blessed hero that week AND one of a large(-ish?) cast of situational heroes, and of course their R6 as well. h) an argument against ideal ranks is that they may spread out your hero investments too thin to let you win many weeks. On the other hand, an argument for them is that even having a hero's R6 doesn't guarantee a win (e.g., at first I was absolutely terrible at using Narlax - and I have still yet to win a week where he is blessed). Also, aiming for ideal ranks lets you maximize your elixir income (https://realm-defense-hero-legends-td.fandom.com/wiki/Realm_Siege_Strategies).
Though an R6 hero also offers the option to use that hero even when not blessed (and yet this works better for some heroes than for others - e.g., Mabyn can perhaps win at R2 in Diamond, but as a situational hero needs her 5th meteor talent, gained at R6, to truly be effective; while Bolton + Obsidian are mostly only used when blessed, and never outside of that - although this week may again be revealing that the devs may change that in the future!). Therefore there are many benefits to either using, or not using, ideal ranks. Ultimately, whether you want to pause at an ideal rank, or keep going all the way to R6 for every hero that you own, seems to be a matter of personal preference: how EXTREME of a personality are you? Do you want to work on increasing your MAXIMUM power, to possibly win a GM title sooner - but also maybe fail to even reach Legendary League at all, as a more junior player, and have little chance on the weeks that you lack the blessed hero (at least in Diamond League, or other leagues earlier in the season, closer to the seasonal reset)? I.e., take a risk, and maybe be #1 on the weeks you've prepared heavily for, but then score very low on (many of) those you've invested literally nothing into? Or do you prefer to aim for a more AVERAGE level of power, which may leave you unable to be promoted on a given week (maybe several of them), but still maybe get some rewards, being neither the best nor the worst, and still get practice either way, and maybe win sooner with less of a hero investment needed for a particular week, leaving you free to focus your efforts elsewhere? Like most things in life, the ideal path is probably somewhere between the extremes of R6ing one hero before moving on to the next, vs. having all heroes at ideal ranks but none at R6. Though there are people who have pursued each of those strategies!
(and I can tell you some of their names if you want:-) Ultimately you need 4 wins to get to Legendary League, and then at least 1 more if you want a GM title that season. So pick a few heroes to get to R6, another few to get to ideal ranks, and with that collection you'll do well. Another hint: do you want your strongest hero investments to be earlier in the season, in your lowest league, or later, in the highest? Watch the https://realm-defense-hero-legends-td.fandom.com/wiki/Blessed_Heroes_-_Tournament page to see how early each hero was blessed in the last few seasons, then pick one that will likely be blessed later rather than earlier, and aim to buy that hero and work on increasing their power. E.g., Yan and Narlax are both in the Meta, and blessed mid-to-late season. Also there are a TON of other helpful tips - about towers, heroes and synergies and combo moves, and many other tournament topics - on https://realm-defense-hero-legends-td.fandom.com/wiki/Tournament_Basic_Info and https://realm-defense-hero-legends-td.fandom.com/wiki/Tournament_Detailed_Strategy. So now all that's left is for me to wish you good luck!:-) Edit: while I thought about adding these couple of thoughts before, they didn't specifically touch on ideal ranks, so I left them out. But so many are asking, so I'll put them in after all... i) there are 3 hero roles to fulfill each week: generic, situational, and blessed. If you want to think about it harder, it's "really" 2 situational and 1 blessed, but since right now one of those slots is nearly ALWAYS Koi, the former formula is at least a nice way to think about/remember it. Generic: especially if you lack blessed heroes and/or Koi, this is about all you've got - so use it!
When you get to W3, Helios or Sethos can work to help get you promoted from Gold - though you shouldn't get them just for this purpose (it is terribly inefficient to buy new heroes all the time when you can awaken earlier ones for nearly half the cost, though that takes TIME, so this is a strategy mainly for P2W players). In W4, Yan and especially Narlax can get you promoted in Platinum (though again, don't buy JUST for this short-term purpose), and in W5, Leif/Caldera can get you promoted even as high as Diamond (later in the season). I doubt that any of these can get you promoted in Masters, and if anyone ever actually DID that, they should count their lucky stars, but it's not something that you should "expect" to happen. Once you get Raida and Koi, though, you'll never use these other heroes for their "generic" utility again. Situational: usually there is some hero / class of heroes that will work best for a given week. Otherwise, for example, if the only powerful heroes you have are Koi & Leif, then every week you'll always bring them, for their *generic* power. But Leif hardly does anything against fliers - merely blessing towers, which, while that works GREAT in RS on blessed tower spots, is virtually useless in tournament situations. Instead, if you brought Smoulder, especially with his R4 anti-flier slow-down talent, then you have a *much* better defense & offense against fliers, even though Smoulder seems to offer FAR less "generic" power than Leif - but even though it is "less", it is "more" appropriate to the *situation* - see? So for a level lacking fliers entirely, Leif would be better, while for a situation where fliers are the ones ending your tournament play, Smoulder can be a huge boon. Also, sometimes situational utility can (nearly or even completely) win out over generic or blessed heroes!
An example is where, on Sethos or Leif's blessed week, a team of strong anti-flier utility can relatively easily get scores as good as or better than a team including the blessed hero at R6 (though skill also plays a role, of course). Lancelot prior to his R5 is also replaceable, and Masamune even at his R6 is *somewhat* so (if it's not a binary yes/no but rather a continuum, his R6 provides *one* route to win, but a strong anti-air team is *another* way, which, even though it offers less power and so can't win a GM, comes at much greater efficiency and may let you get high rewards or even promoted from Masters League). -) anti-fliers: Raida, Smoulder, Connie are enough to get you started, then later you'll want to add Helios & Azura. Each offers something different - Smoulder slows them down, Raida stuns them, Connie does both, Azura can charm up to 4 (good for when there are more rare but tanky ones like W3 crows), and others can be good too: Efrigid also slows them, Narlax pulls them back, etc. The Narlax+Raida pull+charge/stun combo is ESPECIALLY powerful (read more at https://realm-defense-hero-legends-td.fandom.com/wiki/Tournament_Detailed_Strategy#Narlax_.2B_CC_combo). Note that while Fee is tremendous against fliers in campaign, she can't really keep up in this mode, except when she's blessed. -) bosses: Koi & even Raida (& Leif, if you got him for other reasons) can tank fairly well at first. Connie's bunny mamma does even better, and her little bunnies help slow the boss down. Narlax at his R6 can pull them back. Later, you'll want Azura, who can charm an enemy to use as a tank against the boss, and then there is Caldera, who is immune to all physical damage but extremely vulnerable to magic. Although the latter two are rarely blessed themselves, and often aren't as worth bringing as the blessed hero.
If you are just getting started, Fee (at any rank) may actually work surprisingly well, as her wolves can delay a boss somewhat as it pauses to kill them. -) delay: Connie; Raida to stun; Narlax to pull back; Efrigid to slow/freeze; or, for just a few enemies that get past a checkpoint, Yan to teleport, or Mabyn for fear. -) worlds: Mabyn works REALLY well for W3, to send enemies back, whereupon the archer-bots can regain control of the situation after being broken through. Azura works really well for W4, since she is immune to the slow effect, can heal to help counteract all the ranged damage being thrown at you, and can charm strong enemies - like an armored tank to use against a boss, or a strong flier to use against other strong or weak ones, etc. Caldera isn't good in W3 (poison) or W4 (magic), but is very effective in W1, 2 & 5. -) synergies: these can be stronger than anything else (yes, even than Koi - in fact this is the ONLY reason why you might not want to use Koi if you have him) - basically, you either have the synergy partners or you lose that week (except *maybe* in Gold?). Efri & Bolton need both Yan & Koi's R6, Bolton & Obsidian need each other, Fee needs at least 1-2 of her synergy partners, and Smoulder needs his R6 + Narlax to win. Read the wiki for more comprehensive details. Note that every one of Leif's synergies is absolutely useless and it is *never* worth bringing him along, unless you are a more junior player and lack anything better to do (hint: it might help once or twice, but it's REALLY not worth getting those 80 tokens and spending 800 gems to get his R6 - that should be one of, if not literally THE, last thing you do in the game; unless the devs change that soon? I personally would LOVE to see that!:-). -) special mention 1: Yan hastes Koi, and is thus used more often than any other hero, after Koi himself. She can do this at her R5, though - no need to get her R6 until you are ready to take advantage of her 2 synergies.
-) special mention 2: Raida's extremely high generic utility (2nd only to Koi), AND his high situational utility (for fliers, stunning & damaging bosses, large CC, etc.) make him the #1 all-around utility/situational hero... though he only provides a very "average" power level whenever you lack some other hero who could provide a higher MAXIMUM power. When you have literally every other hero in the game, and at their R6, you may never use Raida again (though even that's not quite true - players often use him in their first try at a level, to be ready for anything, even though he is always replaced with someone better to get the final maximum score), but until you invest that deeply (which will take YEARS of your life), Raida can provide a great deal of help. *Especially* on the days where you lack the blessed hero, and also when you lack the top situational hero for that week (Azura?). Use him as a stepping-stone.
blessed heroes: there is no getting around the fact that you need the blessed heroes to have the best chance to win on a given week. That's especially true by Masters League, though of course they still help a LOT to win more often in Gold, Platinum, and Diamond. Until then, strong generic+situational utility can help fill in - some heroes are more replaceable than others as mentioned above - but after you get the Meta (Koi, Raida, Connie, Yan, Narlax), you need to decide whether to prioritize more situational heroes or more blessed heroes. Both ways work, and you probably want to split your efforts b/t the two. Often heroes work for both: e.g. Narlax is blessed every season, usually fairly late, and then last season (11) he was also used another 4 times. In contrast, heroes like Fee, Lancelot, and Masamune are only ever used once, on their blessed week.
But still, you only need 4 wins to get to Legendary, and especially if you already had these heroes at a high rank to help you unlock worlds in campaign, they can be a GREAT way to win, certainly much easier than trying to win with purely generic+situational utility that doesn't match what is needed on a given hero's blessed week. One tip: pick a hero that you like to work with, and get them to R6 - you'll likely do better with them than you would with some other hero that you don't enjoy as much.
j) R7 heroes and future predictions: many people, myself included, think that R7 will mostly be necessary for winning GMs. Thus, R6 becomes another ideal/pausing rank, though this one is useful to win Masters League with. Many people want to know whether they "should" get an additional hero to R6, or focus that time instead on continuing on to R7, though again this is up to your personal preference - do you want to win more often, though possibly not at the #1 spot, and maybe not get promoted but still get rewards, aiming for a higher "average" utility? Or do you want to take a risk for a chance at a GM, and aim for "maximum" utility instead (at the cost of being farther behind in terms of having fewer heroes to use whenever they are blessed)? It's a GAME, so go for what YOU want!:-) TLDR: use ideal/pausing ranks for heroes blessed earlier in the season, and instead put your highest investments into heroes blessed later, where you'll need their power the most.
Howdhee-ho everyone! So the other day I did a ranking of all the Showtime attacks. I’d said that if it got a bit of attention and people seemed interested in this kind of stuff, I’d do rankings for other Persona 5 bits. So today I thought I’d explore Palaces. Now, this one is going to be a bit lengthy because Palaces have a lot to talk about. And for the usual disclaimer: Spoilers ahead! And everything from here is just my own take on it. If you feel differently, awesome! I’d love to hear your thoughts as well!

So, here are the main criteria I’m basing this stuff on.

“Story” - Now, this isn’t a plot review, but rather a review of how the Palace feels in relation to the story. Essentially, how well does this Palace fit, and does it make sense for the ruler?

“Creativeness” - How creative does the Palace feel?

“Gimmicks” - Puzzles, areas, things like that. Are they good? Do they fit thematically?

“Atmosphere” - From design, to enemies, to music. How does it feel? Does it match the tone of the current arc?

“Length” - This is not necessarily “how long is the Palace” but rather “how long does it FEEL”. Does it drag on? Does it feel too short?

Also, I will NOT be including major bosses as part of the Palace. I’ll be covering bosses another day! So without further ado… let’s dive right in with what I feel is the worst Palace. And I don’t think this one will be a very hot take.

#9 - Okumura’s Big Bang Death Star

Yikes. Alright. I’m gonna tackle this one piece at a time, just going down the criteria list. So to start with the story, I don’t think that a space station makes sense, because thematically it’s a bit… odd. Realistically, the whole “point” of Okumura’s arc is that he wants to “ascend to the political world”. And you uh… can’t ascend much further than outer space. I think they could have gotten the same general idea with the Palace being something like a NASA headquarters. Then you still get the space feeling, and the concept of “escaping to Utopia”.
I’ll admit this one is a bit of a nitpick. But it’s always been a nagging issue for me. Now, this is a pretty creative design for a Palace. A giant space station with faceless, robotic drones sacrificing themselves for their leader. It screams of Star Wars with the Stormtroopers just letting themselves get ripped apart for Palpy and Vader. And honestly I remember feeling this sort of overwhelming sense of wonder as I walked into the Palace for the first time and saw SPACE sprawled out in front of me. It’s cool.

Now, here’s where the problems come in. The gimmicks. Not only are they not good, but GODS ABOVE they are repetitive. First there’s the “robot interrogation” section. Try to find the highest ranking robot. But first you need to go through all the ranks below him. If I wanted to be sent up a chain of command until I talked to someone who is actually useful, I’d call up tech support. And fun fact, calling tech support is awful and nobody does it for fun. Well, except apparently the person who designed this “puzzle”. Then we have the breaking arms and lunchtime puzzles which are just… build a bridge here, hit the button, sprint across to the new bridge, make another bridge, run back to the third bridge. I dunno. It’s very uninspired. And then we have the airlocks. Or as I like to call it, wasted potential. This puzzle COULD HAVE BEEN great. But they made it so overly complex and so long that it gets grating.

Now, for the atmosphere. Honestly, I think this Palace does atmosphere very well (which is ironic since it’s in space). But it really gives the idea of a ruthless, corporate conglomerate. And while I think the music is one of the worst tracks in the game, it really does fit here. It’s tedious, repetitive, and droning. Just like working in fast food (and being in this Palace).

And length. Yeah. It’s long. Probably the longest Palace. It definitely feels like it. So yeah. This Palace is kind of not great.
#8 - Kaneshiro in the House from Disney/Pixar’s Up

Now, I don’t want people to think I hate this Palace. Because I don’t. But I do find it to be one of the more bland ones. It’s just kind of… uninspired. Eh. I’ll get more into it below. So as far as the story goes it makes sense but… there isn’t a lot TO Kaneshiro. Like, he’s a guy who likes robbing people. We never get to know him beyond that. So a bank is kind of the only option. So it makes sense because well… nothing else would as far as we know. And unfortunately, this impacts how creative the Palace is. It’s cool that it’s flying, but the flight part is a little… irrelevant. Once you’re in the bank it’s just kind of… a bank. Like, there’s nothing really unique or cool about it. It’s a bank. All of it. The whole thing is just a normal, run of the mill bank once you’re inside. Well… except the money pit. Which is a full like 5 minutes of the Palace so ya’know.

Now, for the gimmicks. There is one. One singular gimmick. And I don’t really like it. Kaneshiro’s bank has the “letter math”. Basically he has a bunch of notes with things like D=1, U=2, M=3, and B=4. Then you go to a panel with the word DUMB on it and put in the code 1234 (sounds like something an idiot would put on his luggage). So yeah. It… certainly exists.

Now I will say, I do like the atmosphere. And the BGM is, as the kids say, “a bop”. I’d say it’s the… fourth best Palace track. And the Palace DOES really feel like a bank. It’s heavily guarded, and you really get the feeling of “I don’t belong here” after you pass the main room. This is the only Palace that really made me feel like I was trespassing somewhere I wasn’t welcome. And if you’ve ever been anywhere in a bank that isn’t the main hall, I’m sure you get the feeling. And the basement level does give me that sort of “bank heist” vibe.

Now, I don’t know how long this Palace is. But it certainly feels long. I think most of this is the basement level.
Once you get to the letter-number puzzle it feels kind of like it starts dragging. So yeah. This Palace is… it’s okay. It’s not good. It’s not bad. It just kinda exists.

#7 - S.S. Shido

I don’t know how controversial this one will be. But I don’t really enjoy this Palace all that much. It gets REALLY old REALLY quickly. But it does have some merits. Firstly, the ship idea makes a lot of sense. Especially after Haru just goes “Here’s the metaphor!” in case the player doesn’t get it. Yeah, it makes sense that Shido has a giant cruise liner filled with only the elite as the country around him collapses. Plus, he does talk about “steering the country” more often than Ryuji says “FOR REAL?!” … okay. Maybe that’s not factual. But you get my point.

Now I will say, this Palace is very creative. The idea of a giant ship cutting through buildings is cool. And I like how it’s treated as a cruise liner because it allows for a lot of additional areas, like the pool, restaurant, and obviously the usual ship bits.

Now for the gimmicks… there is one. It’s the rat puzzle. And it can go fuck itself. Thank you for coming to my TED talk.

Now for the atmosphere. It feels perfect. The Palace itself feels grand, powerful, and intimidating, and the score accompanying it amplifies that feeling by quite a lot. I think it’s a bit of a step down from other Palaces, but it certainly makes sense and really works in regards to Shido.

As for length… holy hell this Palace is long. Both literally and mentally. It has basically 5 mini levels, really annoying and long puzzles, and a whole game’s worth of dialogue. I get that they have a lot of loose ends to wrap up but ye gods this Palace feels like it takes an eternity to beat. This Palace is the textbook definition of wasted potential. It could have been amazing. It has all the pieces it needed. But they squander them by diluting the Palace with annoying puzzles and WAY too much tangentially-related plot stuff.
#6 - King Kamoshida’s Crazy Castle

Now, I know that I have this one at 6th. But that isn’t a bad thing. I personally think this is the first “good” Palace. It’s nothing amazing or crazy, but for the first Palace it’s nice and fun. Obviously the castle aesthetic works with Kamoshida. It makes a lot of sense seeing how he lords his power over everyone in the school. Even Principal Eggman gives in to him. So the idea of him lording over everyone obviously makes a lot of sense. And a bit of a fun fact, the guards in his Palace have the same voices as the other teachers.

And the big castle is actually pretty creative. For a first Palace it really sets a tone, and a standard for other Palaces to follow. It’s grand, absurd, and completely disgusting. Makes sense for something formed from distorted desires. There are also some really cool areas like the chandelier hopping, and the crazy, distorted upper floors.

Now for gimmicks. They’re kind of simple. The two present are the book one, where you need to place the proper book in the proper section, and the one where you need to kill enemies to get the eyes for the statue. Neither is particularly hard, or particularly inspired. They aren’t bad though. And they aren’t overly long. They’re standard RPG trope puzzles.

Now the atmosphere is kind of… strange. Honestly, I find it hard to take this Palace seriously. The BGM sounds like something out of a 70’s porno, and the Palace itself honestly feels like 70’s porn meets Dungeons and Dragons. It doesn’t really fit the story content of the outside world. It doesn’t reflect Kamoshida’s abuse or Shiho’s suicide. It feels a little too silly. I still like the aesthetic, but I don’t think it really fits with the plot. It needed to be more serious.

And this Palace, unfortunately, does start to drag. By the time you reach the messed up, hyper-distorted floors where the floor tiles are floating around, the Palace is getting a bit old.
Though this could be due to the fact that you don’t really get to make any progress during your first like… four visits. Overall, it’s a solid Palace, and a great starting point.

#5 - Madarame’s Museum (I couldn’t think of a creative name for this one. I’m sorry.)

I really like this one. It’s fantastic. And I realize saying that for the 5th ranked Palace is kind of weird, but honestly I think that’s just a testament to how great the next four are. Starting off like normal, this Palace makes a lot of sense… but I always found it odd that his distortion is a museum. Because like… that isn’t exactly unusual. He’s a renowned artist with a ton of very famous works. I feel like he has art in museums. I mean, we’re introduced to him at an exhibit. I dunno. It’s a nitpicky issue that I don’t want to press. Regardless, it obviously makes sense. And I love how all the paintings in here are sort of distorted in their own way to show how Madarame has to change his own cognition to accept his art as his own.

And uh… yeah. This Palace is creative as hell. Sure, at first it feels like a normal museum. But stuff like the weird golden staircase abyss, the awesome courtyard, and the painting puzzles are so cool.

Speaking of the painting puzzles. There are two major puzzles here. The painting ones where you enter paintings Mario 64 style, and the Sayuri puzzle. The one where you enter the paintings is kind of cool, because ultimately it’s about remembering the path that works, while also unlocking other paths to take and figuring out which path will let you escape. It’s cool, and brief, but a little TOO easy. Then there’s the Sayuri puzzle which I love. Basically you are presented with a few different paintings. All the Sayuri, but with slightly different modifications. And you need to pick the “real” one. I like this because it tests how well you were paying attention. They start off obvious, but the differences get more and more subtle as it goes on. It’s a great gimmick.
As far as the atmosphere goes, this place is great. Not only does it match the overall feeling of an art museum, but it honestly has this sort of tenseness to it. I can’t really describe it, but it almost feels ominous. And I think that fits given that Madarame himself is a rather ominous figure. We know he’s bad, but we can’t really prove it for most of the arc. And I think this Palace has a perfect length. It doesn’t feel rushed or like it’s dragging, and I think that’s more because of the physical length. It isn’t an overly long Palace as far as playtime goes. So yeah. This one is pretty damn good. I like it.

#4 - Sae’s Controversial Casino

Yeah. This one is going to piss people off. I know that a LOT of people have this as their favorite Palace. And I can understand why. But it has a few issues that sort of drag it down for me. They don’t drag it down MUCH, but they keep it from getting any higher on my list. Obviously, the Palace makes sense as far as the story is concerned. Sae sees her job as essentially rigged gambling. Anyone outside “the system” thinks they can win, but in reality it’s not possible. As such, everything in her Palace is rigged to make it unwinnable. Or it SHOULD be. But we have a Futaba. So we get to cheat too. “Mwehehe”.

Honestly, the casino and premise are very creative. The concept of a casino full of rigged games that you need to unrig is awesome, and the layout and mission are great. Also, I love how they have it set up so Sae actively wants you to try to reach her. It’s incredibly unique as far as that goes.

Now for gimmicks. There’s really only one, because most of the time you’re either walking around or killing things. And this gimmick… kind of sucks to be honest. I’m talking about the House of Darkness. It’s the only part that is more than a cutscene, standard area, or standard fight. But all it is is a standard area you can’t see. And it sort of sucks. It’s really… boring. And kind of lengthy. It’s pretty bad.
As far as the atmosphere goes it uh… well, it certainly feels like a casino. And Sae’s presence throughout makes it feel much like how the plot does outside. Sae and the SIU are closing in, rigging the game and challenging you to take the fight to them. It’s great, and I love the plot elements here.

And now onto my major gripe. The length. This is definitely the shortest Palace. And it feels short half of the time. The problem is that the parts that DON’T feel short are painfully bad, and feel painfully long. I’m talking mostly about the Dice Game, and the House of Darkness. As I just said, the House of Darkness is little more than some dark corridors. And unfortunately, the Dice Game is the same, but without the darkness. There’s no real “Game” to this Casino. It’s just a bunch of drab, grey hallways that feel like a nuisance to traverse. It sucks when what you WANT is to get to the good casino shenanigans (like the Arena) but instead you have… this stuff. It makes the Palace feel like it drags, even though it’s probably the shortest one. So yeah. I still love this Palace but it has some glaring issues that I can’t overlook.

#3 - Lil Sister’s Big Pyramid

God I love this Palace. Much like with my Showtime list, I honestly think I could lump my top 3 all in as my “Favorite Palace”, but for the sake of this I did want to try to dive into this on a deeper level. I’ll admit, too, that from here on a lot of these placements are more on gut feeling. Anyway, to start off, this one works incredibly well as far as story goes. Throughout the entire Palace we see Futaba go back and forth between wanting help and rejecting help. Her shadow knows we’re busting in from day one and follows us around just like Sae does. But due to her desire to push people away, we are constantly fighting an uphill battle against her to save her, even though she wants us to save her. And the fact that her Palace is a pyramid out in the middle of the desert is awesome symbolism for Futaba’s position.
She hates the idea of being near other people, so she locks herself away. Now, I personally think this Palace is super creative. It has a nice blend of ancient Egypt with the pyramid, but also ultra-modern tech stuff. Random flecks of data appearing all around, mechanical traps, and the room before the boss which is basically a massive data stream with floating hunks of pyramid floor in it. It’s just so cool. It’s a combination of ancient and modern that shouldn’t work, but does.

As for gimmicks, there are three major ones here and I think they’re all great. First are the Anubis puzzles. These are pretty simple, but the gist is you grab an orb from one statue and need to put it in another. However, taking them blocks off certain paths. It’s not super hard. But I like it. Next, there is the binary puzzle. Again, fairly simple. There’s a red column and a blue one, and you need to put certain binary codes into these columns to unlock certain doors. Finally, there are the picture puzzles. And honestly I love these. You come to a mural of something important to Futaba’s life and you need to rearrange it to make the picture “correct”. I love it because the scrambled appearance is symbolic of Futaba’s distorted view of these events. And they get harder as you do more, but never overly hard. It’s just a quick, fun mini-game.

As for atmosphere, I think it does a great job of showing the isolation, desperation, and mistrust Futaba feels. The music score (my 3rd favorite Palace theme) is absolutely amazing and the wailing guitar helps to show the pain in Futaba’s heart. And while this one is lengthy, it never feels overly long or overly short. It changes up the pace enough to feel fresh, and doesn’t overuse the elements it has. So as you can see, I have no problems with this Palace. Only things I like. Which is why placing these top three was so hard for me. But I think the things I like in the other two I happen to like more.

#2 - The Public’s Prison. Memes and Mentos.
Now, Mementos itself is kinda bleh. We all know this. But the Depths of Mementos, the Prison of Regression, is absolutely incredible. And I KNOW this one is going to be controversial as hell. But I can’t help it. I love this Palace. It’s so good. To start with, obviously this one works with the story outside because… well… it’s the one most linked to the outside plot. This is about every single person in the world being unwilling to commit to and plot their own lives. And this place thematically matches. It’s a prison, because every person sees themselves as a prisoner.

And the creativeness levels are off the charts. Sure, they could have gone with a stereotypical “hell” level but they didn’t. It’s a prison of almost alien design. It’s the kind of weird, off the wall evil that I’d expect to see in Mass Effect. Like I could see the Reapers living in the Prison of Regression while they wait for the next cycle. It’s just so damn cool looking. I love this place. It’s so menacingly malevolent without beating you over the head with the horror it holds. Plus the post-fusion part in the second half is so wild and insane looking. It looks like something I’d expect to see in Doom.

The gimmicks are also great. While there’s only one real gimmick, it’s a fun one. A puzzle where you need to light up tiles on the floor. The first one is a gimme. But they increase in difficulty, from hilariously easy to ones where you actually need to complete other puzzles first in order to do the one necessary to progress.

I already sort of touched on this with the creative part, but the atmosphere of existential dread this place holds is immense. And the BGM, Freedom and Security (my personal favorite Palace theme), really hammers that home. It has an eerie, ominous feeling to it that really works well in tandem with the rest of the level.
And as I mentioned above, it flips from being dreadful and terrifying to having our heroes triumphantly running up a staircase of bones, destroying Yaldy’s minions as they march on to kick his ass like Doom Guy sprinting through Hell to kill a big boss demon. Finally, it’s a perfect length. Not overly long, but not short either. And the plot elements halfway through give a nice breather and tone shift before thrusting you into the awesome second half as you climb up to the Grail’s chamber. If I had to give a reason why this one is in second place, it’s that the second half is so focused on being cinematically badass that it foregoes exploration in exchange for a linear path. And while it works well, I still prefer the first half of the Palace.

#1 - Dr. Snack’s Hospital of Happiness

Here it is folks. My number one. I don’t think this one will be as controversial as some of the others. But even so. Here we are! So to start, obviously this Palace makes a ton of sense for Maruki. He was intended to get a research lab built in the spot where this Palace forms, and the Palace IS a research lab. So obviously that works. And the whole concept was about using cognition to change people’s lives for the better. We can see this in the Palace during the quiz section where we see how Maruki guides patients to his happiness. Which is thematically nice because it shows that while Maruki claims he wants everyone to be happy with their desires, he actually wants them happy with his. Anyway, I’m rambling. The Palace is great as far as story goes and makes sense for the character.

And yeah. This place is creative as hell. It’s not just a research lab. It’s a massive spire with rainbow bridges, massive telescopes, and a dome on top meant to represent heaven, since Maruki sees himself as God. It’s the most grandiose, over the top thing in this game. And I’ll remind you, in this game you shoot a god in the face with a sword gun. *ahem* Anyway. The gimmicks here are really damn good.
The first thing is the awesome quiz section. I do think it’s a little bogged down by the whole “the team must meet and discuss” part, but I love how this whole thing is just “How well do you know Maruki?”. If you know him well, you get a reward. If you don’t, you get punished. Then there’s the color bridge section, which is just “if the Okumura space tunnels didn’t suck”. It’s so good because it requires a lot more strategy and a lot less luck than the Okumura part. And if you make a mistake it’s a much easier fix.

The atmosphere is amazing too. The sterile but obviously corrupted first bit when you’re in the main building feels very clinical. But the strange bits of oddities really give off an other-worldly vibe. Remember how I said the Prison of Regression felt like it had Mass Effect vibes? This part has like… Resident Evil vibes. It’s like a modern hospital tainted by an otherworldly monstrosity and it’s awesome (and, actually, not far from the truth. Much love, Azathoth.) Oh, and the BGM is my 2nd favorite. I fucking adore Gentle Madman.

As for the length, I do think it’s probably the longest Palace; it definitely comes close to Okumura’s. The difference is you’re actually forced out about a third of the way through and, if you’re playing “optimally”, you won’t be back for a bit. So it never feels like it gets old or tired. And it changes up often enough, and with drastic enough changes, that it never drags on like the bottom three Palaces on this list. So it’s great. GOD DAMN I LOVE THIS PALACE.

Aaaaanyway. That’s my list. I’m thinking I’ll do bosses next, but I dunno. What would you guys want a massive rank essay on? Bosses? Awakenings? Phantom Thief members? Party Personas? And what are your thoughts on this here list? How would you rank the Palaces? I hope you all enjoyed this, and I look forward to hearing your opinions in the comments!
Invisible Object Culling in Quake-Related Engines (REVISED)
Prologue

Despite all the great achievements in video card development and the sworn assurances of developers about drawing 2 to 3 million polygons on screen without a significant FPS drop, in reality it's not all that rosy. It depends on the rendering methods, on the number of involved textures, and on the complexity and number of involved shaders. So even if all this really does ultimately lead to high performance, it only happens in the demos that the developers themselves kindly offer. In these demos, some "spherical dragons in a vacuum" made of a good hundred thousand polygons are indeed drawn very quickly. However, the real in-game situation for some reason never looks like this funny dragon from a demo, and as a result many comrades abandon the development of their "Crysis killer" as soon as they can render a single room with a couple of light sources, because for some reason FPS in this room fluctuates around 40-60 even on their 8800GTS, and upon creating a second room it drops to a whopping 20. Of course, with problems like this, it would be incorrect to say that things aren't that bad, that the trouble of such developers is purely the absence of correctly implemented culling, and that it is time for them to read this article. But for those who have already overcome "the first room syndrome" and tried to draw the world - inferior though it may be, but anyway - this problem really is relevant. However, it should be borne in mind that QUAKE, written in ancient times, was designed exclusively for levels of a "corridor" kind; therefore the clipping methods discussed in this article are not applicable to landscapes, such as the ones from STALKER or Crysis, since completely different methods work there, whose analysis is beyond the scope of this article. Meanwhile we'll talk about the classic corridor approach to mapping and the effective clipping of invisible surfaces, as well as clipping of entire objects.
The paper tree of balloon leaves
As you probably know, QUAKE uses BSP, the Binary Space Partitioning tree. This is a space indexing algorithm, and BSP itself doesn't care if the space is open or closed; it doesn't even care if the map is sealed, it can be anything. BSP implies the division of a three-dimensional object by a certain number of secant planes, called "branches" or "nodes", into volumetric areas or rooms called "leaves". The names are confusing, as you can see. In QUAKE / QUAKE2 the branches usually contain information about the surfaces lying on that branch, and the leaves are empty space, not filled with anything. Although sometimes a leaf may contain water, for example (in the form of a variable that indicates, specifically, that we've got water in this leaf). Also, a leaf contains a pointer to the potential visibility data (Potentially Visible Set, PVS) and a list of all surfaces that are marked as being visible from this leaf. The approach itself implies that we are able to draw our world however we prefer, either using leaves only or using branches only. This is especially noticeable in different versions of QUAKE: for example, in QUAKE1 we just mark the surfaces in a leaf as visible and then sequentially go through all the surfaces visible from a particular branch, assembling chains of surfaces to draw them later. But in QUAKE3, we can accumulate visible surfaces only once we get into the leaf itself. In QUAKE and QUAKE2, all surfaces must lie on a node, which is why the BSP tree grows rather quickly, but in exchange this makes it possible to trace these surfaces by simply moving around the tree, without wasting time checking each surface separately, which affects the speed of the tracer positively.
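To make the branch/leaf split above concrete, here is a minimal sketch of the two data shapes in C. The field names are illustrative, not the real engine structs (which are larger and differ between the QUAKE games): a node carries its secant plane plus, in QUAKE/QUAKE2, the surfaces lying on it, while a leaf carries contents (e.g. water) and the PVS pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of Quake-style BSP data; field names are illustrative. */
typedef struct bsp_node_s {
    float  plane_normal[3];         /* the secant plane of this branch */
    float  plane_dist;
    struct bsp_node_s *children[2]; /* front/back; may point at leaves */
    int    first_surface;           /* surfaces lying on this node (Q1/Q2) */
    int    num_surfaces;
} bsp_node_t;

typedef struct bsp_leaf_s {
    int    contents;                /* e.g. "this leaf holds water" */
    const unsigned char *pvs;       /* packed potential-visibility row */
    const int *mark_surfaces;       /* surfaces visible from this leaf */
    int    num_mark_surfaces;
} bsp_leaf_t;
```

The key asymmetry the text describes is visible here: visibility data hangs off the *leaf*, while in QUAKE/QUAKE2 the renderable surfaces also hang off the *node*, which is what lets those engines chain surfaces without descending all the way to leaves.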
Because of this, a unique surface is linked to each node (the original surface is divided into several if necessary), so in the nodes we always have what is known to be visible beforehand, and therefore we can perform a recursive search on the tree, using the bounding pyramid of the frustum as the direction of our movement along the BSP tree (the SV_RecursiveWorldNode function). In QUAKE3, the tree was simplified and it tries to avoid geometry cuts as much as possible (a BSP tree is not even obliged to cut geometry; such cuts are but a matter of the optimality of the tree). And surfaces in QUAKE3 do not lie on the node, because patches and triangle models lie there instead. But what would happen if they were put on the node nevertheless, you can see on the example of "The Edge Of Forever" map that I compiled recently for an experimental version of Xash. It turns out that in places that had a couple thousand visible nodes and leaves in the original, there are almost 170 thousand of them with the new tree. And this is the result after all the preliminary optimizations; otherwise it could have been even more, he-he. Yeah, so... For this reason, the tree in QUAKE3 does not put anything on the node, and we certainly do need to get into the leaf, mark the surfaces visible in it, and add them to the rendering list. On the contrary, in QUAKE / QUAKE2 going deep down to the leaf itself is not necessary. Invisible polygon cutoff (we are talking about world polys; separate game objects will be discussed a bit later) is based on two methods: The first method is to use bit-vectors of visibility (the so-called PVS - Potential Visible Set). The second method is regular frustum culling, which actually has nothing to do with BSP but works just as efficiently, under a certain number of conditions of course. Bottom line: together these two methods provide almost perfect clipping of invisible polygons, drawing a very small visible piece out of the vast world. Let's take a closer look at PVS and how it works.
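The recursive walk mentioned above can be sketched with a toy 1-D stand-in (this is *not* the real SV_RecursiveWorldNode, just the shape of its idea under simplified assumptions): every node knows the bounds of everything beneath it, and a whole subtree is skipped the moment those bounds fall outside the "frustum", here reduced to a visible interval [lo, hi].

```c
#include <assert.h>
#include <stddef.h>

/* Toy 1-D stand-in for the recursive BSP walk; names illustrative. */
typedef struct tnode_s {
    double min, max;          /* bounds of this whole subtree */
    struct tnode_s *kids[2];  /* NULL for a leaf */
    int visited;              /* did the walk reach this node? */
} tnode_t;

static void recurse_world(tnode_t *n, double lo, double hi) {
    if (n == NULL)
        return;
    if (n->max < lo || n->min > hi)
        return;               /* failed "BBox vs frustum" test: cull subtree */
    n->visited = 1;           /* in QUAKE/QUAKE2: chain node surfaces here */
    recurse_world(n->kids[0], lo, hi);
    recurse_world(n->kids[1], lo, hi);
}
```

The point of the sketch is the early return: one interval comparison discards an entire subtree, which is exactly why walking the tree beats testing every surface individually.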
When FIDO users get drunk
The underlying idea of PVS is to record the fact that one leaf is visible from another. With BSP alone this is basically impossible, because leaves from completely different branches can be visible at the same time, and you will never find a pattern for leaves from different branches seeing each other - it simply doesn't exist. Therefore, the compiler has to do the hard work for us, brute-force checking the visibility of all leaves from all leaves. The information about visibility in this case is scanty: one Boolean variable with possible values 0 and 1, where 0 means the leaf is not visible and 1 means it is. It is easy to guess that for each leaf there is a unique set of such Boolean variables, the size of the total number of leaves on the map. So such a set for all the leaves will take an order of magnitude more space: the number of leaves, multiplied by the number of leaves, multiplied by the size of the variable in which we store the visibility information (0 \ 1). And the number of leaves, as you can easily guess, is determined by the map size and by the compiler, which, upon reaching a certain leaf size, ceases to divide the world further and treats the resulting node as a leaf. Leaf sizes vary between the QUAKEs. In QUAKE1, for example, leaves are very small: the compiler divides the standard boxmap in QUAKE1 into as many as four leaves, while in QUAKE3 a similar boxmap takes only one leaf. But we digress. Let's estimate the size of our future PVS file. Suppose we have an average map with a couple thousand leaves. If the information about leaf visibility were stored in a variable of char type (1 byte), the size of visdata for this level would be, no more no less, almost 4 megabytes. That is, a LOT.
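The arithmetic above is easy to check. A tiny sketch (the helper names are mine, the numbers are the ones from the paragraph: one visibility entry per leaf pair):

```c
/* Visdata size if every leaf-pair visibility flag takes a whole char. */
unsigned long PvsBytesAsChars(unsigned long leafs)
{
    return leafs * leafs; /* one byte per (from, to) pair */
}

/* Visdata size if each flag takes one bit: each leaf owns a row of
 * leafs bits, rounded up to whole bytes. */
unsigned long PvsBytesAsBits(unsigned long leafs)
{
    unsigned long rowbytes = (leafs + 7) / 8;
    return leafs * rowbytes;
}
```

For 2000 leaves the char version gives 4,000,000 bytes (the "almost 4 megabytes" above), while the bit-packed version gives 500,000 bytes, the eightfold saving discussed next.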
Of course, an average modern developer would just shrug and pack the final result into a zip archive, but back in 1995 end users had modest machines with little memory, and therefore visdata was packed in "more different" ways. The first optimization step is storing the data not in bytes but in bits. It is easy to guess that this approach reduces the final size eightfold and, typically enough, does so without any resource-intensive algorithms like Huffman trees. Although in exchange, it somewhat worsens the code's usability and readability. Why am I writing this? Because of many developers' lack of understanding of conditions in code like this:
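The code block itself did not survive in this copy of the article, but the condition in question has the classic shape of a packed-bitfield test, which is how the Quake sources read visdata (the function wrapper is mine, the expression inside is the canonical form):

```c
/* Test bit 'leafnum' in a packed visibility array:
 * leafnum >> 3 selects the byte, leafnum & 7 selects the bit inside it. */
int Vis_LeafVisible(const unsigned char *vis, int leafnum)
{
    return (vis[leafnum >> 3] & (1 << (leafnum & 7))) != 0;
}
```

In the engine this usually appears inline, as a bare `if (vis[i >> 3] & (1 << (i & 7)))`, which is exactly the kind of condition that trips people up.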
Actually, this condition implements simple, beautiful and elegant access to the desired bit in the array (as one may recall, you cannot address anything smaller than one byte, so individual bits can only be reached via bit operations).
Titans that keep the globe spinning
The visible part of the world is cut off in the same fashion: we find the current leaf where the player is located (in QUAKE this is implemented by the Mod_PointInLeaf function), then we get a pointer to the visdata for the current leaf (for convenience, it is linked directly to the leaf in the form of the "compressed_vis" pointer), and then we bluntly go through all the leaves and branches of the map and check them for visibility from our leaf (this can be seen in the R_MarkLeaves function). Whenever a leaf turns out to be visible from the current leaf, we assign it a number from the "r_visframecount" sequence, which increases by one every frame. Thus we record that this leaf is visible while building the current frame. In the next frame, "r_visframecount" is incremented by one and all the leaves are considered invisible again. As one can understand, this is much more convenient and much faster than revisiting all the leaves at the end of each frame and zeroing their "visible" variable. I drew attention to this feature because this mechanism also bothers some people who don't understand how it works. The R_RecursiveWorldNode function walks along the leaves and branches marked this way. It cuts off obviously invisible leaves and accumulates a list of surfaces from the visible ones. Of course, the first check is for equality of r_visframecount and the node's visframe. Then the branch undergoes the frustum pyramid check, and if this check fails, we don't climb further along this branch. Having stumbled upon a leaf, we mark all its surfaces as visible in the same way, assigning the current r_framecount value to their visframe variable (later this helps us quickly determine whether a certain surface is visible in the current frame).
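The frame-counter trick is trivial to sketch. Instead of clearing a flag on every leaf at the end of each frame, "visible" becomes a comparison against a global counter (the variable names mirror the ones above; the helper functions are an illustrative sketch, not the engine's actual API):

```c
typedef struct {
    int visframe; /* the frame number when this leaf was last marked visible */
} mleaf_t;

int r_visframecount; /* bumped once per frame */

void R_NewFrame(void)        { r_visframecount++; }
void MarkLeaf(mleaf_t *leaf) { leaf->visframe = r_visframecount; }

int LeafVisible(const mleaf_t *leaf)
{
    /* Every leaf becomes "invisible" again automatically the moment
     * the counter advances - no per-leaf clearing pass needed. */
    return leaf->visframe == r_visframecount;
}
```

Marking N visible leaves costs O(N) writes per frame, versus O(total leaves) for a clearing pass, which is exactly why the counter approach is faster.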
Then, using a simple function, we determine which side of the branch's plane we are on (each branch has its own plane, literally called "plane" in the code) and, again, for now we just take all the surfaces linked to this branch and add them to the drawing chain (the so-called "texturechain"), although nobody can actually stop us from drawing them immediately, right there (in the QUAKE1 source code one can see both options), having previously checked these surfaces for clipping against the frustum pyramid, or at least having made sure that the surface faces us. In QUAKE, each surface has a special flag SURF_PLANEBACK which helps us determine the orientation of the surface. In QUAKE3 there is no such flag anymore, so clipping of invisible surfaces is not as efficient, sending twice as many surfaces to the renderer. However, their total number after all the checks is not that great. Still, whatever one may say, adding this check to Xash3D raised the average FPS almost one and a half times compared to the original Half-Life. That is on the matter of whether it is beneficial. But we digress. So, after chaining and drawing the visible surfaces, we call R_RecursiveWorldNode again, now for the second of the two root branches of the BSP tree. Just in case: the visible surfaces may well be there too. When the recursion ends, the result is either a fully rendered world, or at least chains of visible surfaces, which is what can actually be sent for rendering with OpenGL or Direct3D, if we did not draw our world right in the R_RecursiveWorldNode function of course. Actually, this method, with minor upgrades, is successfully used in all three QUAKEs.
A naked man is in a wardrobe because he's waiting for a tram
One of the upgrades is the utilization of so-called areaportals. This is another optimization method, coming straight out of QUAKE2. The point of areaportals is that the game logic can turn the visibility of entire sectors on and off at its discretion. Technically, this is achieved as follows: the world is divided into zones similar to the usual partitioning along the BSP tree, however, there can't be more than 256 of them (later I will explain why) and they are not connected in any way. Regular visibility is determined just like in QUAKE; however, by placing a special "func_areaportal" entity we can force the compiler to split an area in two. This mechanism operates on approximately the same principle as the algorithm that searches for holes in the map, so you won't deceive the compiler by putting func_areaportal in a bare field - the compiler will simply ignore it. Although if you make the areaportal the size of the cross-section of this field (reaching the skybox in all directions), the zones will be divided in spite of everything. We can observe this technique in Half-Life 2, where an attempt to return to old places (with cheats, for example) shows us disconnected areaportals and a brief transition through the void from one zone to another. Actually, this mechanism is what helped Half-Life 2 successfully simulate large spaces while still using the BSP level structure (I have already said that BSP, or rather its visibility check algorithm, is not very suitable for open spaces). So an installed areaportal forcibly breaks one zone into two, and the rest of the zoning is at the discretion of the compiler, which at the same time makes sure not to exceed the 256 zone limit, so zone sizes can be completely different. Well, I repeat, it depends on the overall size of the map. Our areaportal is connected to some door dividing these two zones. When the door is closed, it turns the areaportal off and the zones are separated from each other.
Therefore, if the player is not in the cut-off zone, rendering it is not worth it. In QUAKE we'd have to do a bunch of checks, and it's possible that we could only cut off a fraction of the polygons (after all, the door itself is not an obstacle for the visibility check, much less for the frustum). Compare that with this case: one command is issued, and the whole room is excluded from visibility. "Not bad," you'd say, "but how would the renderer find out? After all, we performed all our operations on the server, and the client does not know anything about it." And here we come back to the question of why there can't be more than 256 zones. The point is, the information about the visibility of all zones is likewise packed into bit flags (like PVS) and transmitted to the client in a network message. Dividing 256 bits by 8 makes 32 bytes, which generally isn't that much. In addition, the tail of this information can easily be cut off if it contains only zeroes. The payback for that optimization is an extra byte that has to be transmitted over the network to indicate the actual size of the zone visibility message. But in general, this approach justified itself.
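The 256-zone mask can be sketched with the same bit tricks as the PVS, plus the tail trimming that saves network bytes (the helper names are mine; this is an illustrative sketch of the scheme described above, not engine code):

```c
#define MAX_AREAS  256
#define AREA_BYTES (MAX_AREAS / 8) /* the 32 bytes mentioned above */

void Area_Set(unsigned char bits[AREA_BYTES], int area)
{
    bits[area >> 3] |= (unsigned char)(1 << (area & 7));
}

int Area_Test(const unsigned char bits[AREA_BYTES], int area)
{
    return (bits[area >> 3] & (1 << (area & 7))) != 0;
}

/* Number of bytes actually worth transmitting: drop the all-zero tail.
 * The receiver then needs one extra length byte to know where we stopped. */
int Area_UsedBytes(const unsigned char bits[AREA_BYTES])
{
    int n = AREA_BYTES;
    while (n > 0 && bits[n - 1] == 0)
        n--;
    return n;
}
```

With only low-numbered zones visible, the message shrinks from 32 bytes to just a few, at the cost of that one extra length byte.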
Light_environment traces enter from the back
Source Engine turned out to have a terrible bug which makes the whole areaportal thing nearly meaningless. Numerous problems arise because of it: water breaks down into segments that pop in... well, you should be familiar with all this by now. The areaportal cuts the geometry unpredictably, like an ordinary secant plane, but its whole point is being predictable! Areaportal brushes in Source Engine have absolutely no priority in splitting the map, whereas it should be like this: first the tree is cut the regular way, and when no suitable planes are left, the final secant plane of the areaportal is used. This is the only way to cut the sectors correctly.
The second optimization method, as I said, is an increased size of the final leaf, akin to QUAKE3. It is believed that a video card can draw a certain number of polygons faster than the CPU can check whether they are visible. This comes from the very concept of a visibility check: if the check takes longer than rendering outright, then to hell with the check. The controversy of this approach stems from the wide range of video cards in the hands of end users, and it is greatly amplified by the surging fashion for laptops and netbooks, in which the video card is a very conditional and very weak concept (don't even consider its claimed Shader Model 3 support). So for desktop gaming machines it is more efficient to draw more at a time, but for the weak video cards of laptops traditional culling remains more reliable. Even if it is as simple a culling as I described earlier.
Decompression sickness simulator
Although I should also mention the principles of frustum culling; perhaps they are incomprehensible to some. Cutoff by the frustum pyramid is pure mathematics, without any compiler calculations. A clipping pyramid is built from the current direction of the player's gaze (the tip of the pyramid, in case someone can't picture it, sits at the player's point of view, and its base is oriented in the direction of the player's view). The angle between the walls of the pyramid can be acute or obtuse; as you probably guessed already, it depends on the player's FOV. In addition, the player can forcefully pull the far wall of the pyramid closer to himself (yes, this is the notorious "MaxRange" parameter in the "worldspawn" settings of the map editor). Of course, OpenGL also builds a similar pyramid for its internal needs when it takes information from the projection matrix, but we're talking about the local pyramid now. The finished pyramid consists of 4-6 planes (QUAKE uses only 4 planes and trusts OpenGL to independently clip far and near polygons, but if you write your own renderer and intend to support mirrors and portals, you will definitely need all six planes). Well, the frustum test itself is an elementary check for the presence of an AA-box (AABB, Axis-Aligned Bounding Box) in the frustum pyramid. Or, speaking more correctly, a check for their intersection. Let me remind you that each branch has its own dimensions (a fragment of the secant plane bounded by the neighboring perpendicular secant planes) which are checked for intersection. But unfortunately, the frustum test has one fundamental drawback: it cannot cut off what is directly in the player's view. We can adjust the cutoff distance, we can even pull off that "ear feint" like they do in QFusion, where the final zFar value is calculated each frame before rendering and then taken into account in entity clipping, but after all, whatever they say, that value itself is obtained from PVS information.
Therefore, neither of the two methods can replace the other; they complement each other. This should be remembered.
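The AABB-versus-frustum check itself is just a handful of dot products. A minimal sketch, assuming the plane convention dot(normal, p) >= dist means "inside" (the Quake sources do the same job with their BoxOnPlaneSide machinery; the function here is my simplified illustration):

```c
typedef struct {
    float normal[3];
    float dist;
} plane_t;

/* Returns 1 if the box mins..maxs lies fully behind at least one frustum
 * plane, i.e. is certainly invisible. For each plane we test only the box
 * corner farthest along the plane normal: if even that corner is behind
 * the plane, the whole box is. */
int R_CullBox(const float mins[3], const float maxs[3],
              const plane_t *frustum, int numplanes)
{
    for (int i = 0; i < numplanes; i++) {
        const plane_t *p = &frustum[i];
        float d = 0.0f;
        for (int j = 0; j < 3; j++)
            d += p->normal[j] * (p->normal[j] >= 0.0f ? maxs[j] : mins[j]);
        if (d < p->dist)
            return 1; /* fully outside this plane: cull it */
    }
    return 0; /* touches or is inside the frustum */
}
```

Note that this is conservative in the right direction: a box that merely clips a plane is kept, which is fine, because the rasterizer will clip it anyway.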
I gotta lay off the pills I'm taking
It seems we have figured out the rendering of the world, so now we move on smoothly to the cutoff of moving objects... which are all the visible objects in the world! Even the ones that, at first glance, stand still and aren't planning to move anywhere. Because the player moves! From one point he sees a certain static object, and from another point, of course, he no longer does. This detail should also be considered. Actually, at the beginning of this article I already described the algorithm of the objects' visibility check in detail: first we find the visible leaf for the player, then we find the visible leaf for the entity, and then we check via visdata whether they see each other. I would also like to clarify (in case someone doesn't understand) that each moving entity is assigned the number of its current visible leaf, i.e. for the entity's own current position; the leaves themselves are of course static and always stay in the same place.
Ostrich is such an OP problem solver
So the method described above has two potential problems. The first problem is that even if A equals B, then, oddly enough, B is far from always equal to A. In other words, entity A can see entity B, but this does not mean that entity B sees entity A, and no, it's not about one of them "looking" away. So why does this happen? Most often for two reasons. The first reason is that the ORIGIN of one of the entities sits deep inside a wall, and the Mod_PointInLeaf function for it points to the outer "zero" leaf, from which EVERYTHING is visible (haven't any of you ever flown around outside the map?). Meanwhile, no leaf inside the map can see the outer leaf. These two features actually explain an interesting fact: the entire world geometry becomes visible and, on the contrary, all objects disappear when you fly outside the map. In regular play, similar problems can occur for objects attached to a wall or recessed into a wall. For example, sometimes the sounds of a pressed button or an opening door disappear because its current position went beyond the world borders. This phenomenon is fought by swapping objects A and B or by obtaining alternative points for the position of an object, but all the same, none of it is very reliable.
But lawyer said that you don't exist
In addition, as I said, there is another problem. It comes from the fact that not every entity fits in a single leaf. Only the player is small enough to always be found in one leaf only (well, in the most extreme case, in two leaves on the border of water and air; this phenomenon is fought with various hacks, btw), but some giant hentacle or, on the contrary, an elevator made as a door entity can easily occupy 30-40 leaves at a time. An attempt to check a single leaf (for example, the one where the center of the model is) will inevitably lead to a deplorable result: as soon as the center of the object leaves the player's visibility range, the entire object will disappear completely. The most common case is the notorious func_door used as an elevator. There is one in QUAKE on E1M1. Observe: it travels halfway, and then its ORIGIN is outside the map, so it should disappear from the player's field of view. However, it does not go anywhere, right? Let's see in greater detail how this is done. The simplest idea that comes to mind: since the object occupies several leaves, we have to store them all somewhere in the object's structure in the code and check them one by one. If at least one of these leaves is visible, then the whole object is visible (even if it's just its very tip). This is exactly what was implemented in QUAKE: a static array for 16 leaves and a simple recursive function SV_FindTouchedLeafs that looks for all the leaves within the bounds stored in the "pev->absmin" and "pev->absmax" variables (pev being a pointer to the entvars_t table). absmin and absmax are recalculated each time SV_LinkEdict (or its more specific case, UTIL_SetOrigin) is called. Hence the quite logical conclusion that simply changing the ORIGIN without recalculating the visible leaves will sooner or later take the object out of visibility, even if, surprisingly enough, it's right in front of the player and the player should technically still be able to see it.
Inb4: "why does one have to call UTIL_SetOrigin at all, wouldn't it be easier to just assign a new value to the "pev->origin" vector without calling this function?" It wouldn't. With this method we can solve both of the former problems perfectly: we can fight the loss of visibility when the object's ORIGIN goes beyond the world borders, and we can level out the difference between visibility for A->B and visibility for B->A.
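The leaf-collection walk is short enough to sketch too. Assuming the same minimal node layout as the rest of these examples (illustrative; it follows the idea of SV_FindTouchedLeafs, not its exact code), the entity's box is pushed down the tree, recursing into both children whenever it straddles a split plane:

```c
#define MAX_ENT_LEAFS 16 /* the static array size mentioned above */

typedef struct node_s {
    int contents;              /* negative in leaves */
    float normal[3], dist;     /* split plane (nodes only) */
    int leafnum;               /* meaningful in leaves */
    struct node_s *children[2];
} node_t;

typedef struct {
    int leafnums[MAX_ENT_LEAFS];
    int num_leafs;
} ent_leafs_t;

/* Collect every leaf overlapped by the box absmin..absmax. */
void FindTouchedLeafs(const node_t *node, const float absmin[3],
                      const float absmax[3], ent_leafs_t *out)
{
    if (node->contents < 0) {             /* reached a leaf */
        if (out->num_leafs < MAX_ENT_LEAFS)
            out->leafnums[out->num_leafs++] = node->leafnum;
        return;
    }
    /* Project the box onto the plane normal to see which sides it touches. */
    float dmin = 0, dmax = 0;
    for (int j = 0; j < 3; j++) {
        float a = node->normal[j] * absmin[j];
        float b = node->normal[j] * absmax[j];
        dmin += a < b ? a : b;
        dmax += a < b ? b : a;
    }
    if (dmax >= node->dist) /* box reaches the front side */
        FindTouchedLeafs(node->children[0], absmin, absmax, out);
    if (dmin < node->dist)  /* box reaches the back side */
        FindTouchedLeafs(node->children[1], absmin, absmax, out);
}
```

This is exactly why the E1M1 elevator stays visible: even when its ORIGIN leaves the map, several of the leaves recorded by this walk still fall inside the player's PVS.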
Hey /lifeisstrange, it's been a while since I'd lurked around here or even posted. Things sure are pretty fucked up out there, so why not spend all of our available free time cooped up indoors playing games? Perfect way to kill time these days :) I started LIS2 a few weeks ago, just finished a few days ago, and have been ruminating over it since. So here I am, just sharing my thoughts about an amazing game that certainly isn't without its flaws. Apologies in advance for this reddit version of verbal diarrhea: this is just me putting words to digital paper as it comes to me. First off, I need to state that I really didn't like that Brody just gets an honourable mention at the end of the story. (Though it's worth noting that he certainly had a coming of age story of his own.) Not even a quick "Thanks for everything" comment on his blog from a Diaz brother? At least for me the entire interaction with Brody changed how I was trying to "raise" Daniel for the rest of the game. If he had such an impact on how I would play the rest of the game as Sean, I would have liked to have had the option to reach out to him, maybe from Claire and Stephen's home in Beaver Creek or from Karen's trailer in Away. I really feel like Brody was my-Daniel's catalyst. One thing I loved about LIS2 is that you have to live with your choices. All of them. There's no going back. It made the game more stressful, and made me more conscious about what I wanted to say. With Max in LIS1, most choices could be rewound to choose the other options. I usually went first with the "bad" options I wouldn't normally have chosen, just to see their outcomes. Then I'd rewind until I was ready to go with the options I really wanted to take. This time in LIS2 there's only one path of choices you can make, and you have to live with it. I liked the mechanic in LIS1 as it was fun, but I appreciated more the permanent nature of choices in LIS2. And speaking of choices, I know that a lot of people didn't like Sean.
They call(ed) him whiny and a pathetic attempt at portraying a teenager at 16. I actually think that it's a pretty reasonable depiction of a 16 year old in the given situation. What were you doing at 16? Were you on the lam, all while trying to raise a 9/10 year old younger sibling? While I agree that Sean's character was indeed whiny, it was well suited because he's stuck between a rock and a hard place: he has to make decisions for himself and Daniel that would keep them alive and moving. It's one thing to be a latchkey kid and raise your younger sibling with a stocked fridge and a roof over your head. It's an entirely different thing to be doing that without that stocked fridge and roof over your head, and with law enforcement hunting you down. I felt like Dontnod wrote Sean's character as best as they could. Despite his whiny nature -because, again, 16 year olds want freedom and not responsibilities - he was just completely unsure of everything he did. He only knew that he had to take care of Daniel - Daniel was his only focus and every decision had to be made around that. But Sean sure as shit didn't know what was really best, he just kind of winged it and prayed it would work later on. Guess he learned pretty quickly in Ep4 that prayer isn't everything. Daniel is no different. I feel like Dontnod kind of dropped the ball on Daniel a bit by making him a bit too wide-eyed the entire season. I fully get that he would be this way in Episodes 1 and 2, but by Episode 3 I would have thought Daniel would have sunken into a form of depression. Children that age are emotional sponges. Yeah, the Humboldt County crew were a pretty awesome ragtag family ("friends are the family you choose"), and they seemingly took in and took care of Daniel as one of their own. But by this point in time Daniel should have recognized there's no going back to the life he used to have; I would have surmised that depression would have affected him somewhat by now. 
I guess Dontnod didn't want to make things any darker for the little 'un than they already were? So that brings me to what I think is the strongest point of the game: it's a coming of age story. It's not Ferris Bueller or The Breakfast Club, but it's Dontnod's take on how one poor sod has to grow up, quickly, and take care of his wolf pack of two. I think they did a good job here, even though everything felt so far-fetched. But isn't that exactly what makes Season 2 so great? Despite the primary supernatural element that is never explained; the near comically over-the-top depiction of a religious commune; the biggest private hospital room I've ever seen to date, with a TV, and all for a yet-to-be-charged criminal?!; and the awkward "look, we want to have overtly racist people in this game, but we don't want to actually make them racist, so we'll just imply everything, OK?" characters; all of Sean's choices lead up to "What Will Daniel Do?" at the very end of Ep5 Wolves. It doesn't matter what happened to Daniel in the game, what is more important is what Sean tells Daniel after each of these occurrences; Sean's words are everything in this game. The Morality mechanic seemed a little too binary at times and I wish there was more grey area allowed, but I feel that it still worked out well in the end. Children don't quite understand the greyer areas of the world at large as much as grownups do, so I guess that's why the choices couldn't be so in the game. Now looking back at the characters, I feel conflicted about them as a whole. I felt like the written dialogue in LIS2 was significantly better than LIS1, but the dialogue was delivered/performed better in LIS1. Does that make sense? The characters were deeper and richer in LIS2, it's just that they didn't deliver their lines as well. Or maybe it was the editing of the dialogue that made it feel more harsh, less natural.
Esteban got so little time in the game, but he was the totem on which (my) Sean based all of his decisions. And Karen was easily my favourite supporting character: she starts off being the devil herself (because she isn't there to defend herself), only for us to find out that she's anything but. She's just broken, just like everybody else the Wolf Brothers meet along their journey, and trying to atone for it. There's something a bit more natural about the characters, how everybody faces some sort of struggle of their own. Again, Dontnod made each struggle too surreal since that's really the only way to fit them in a story arc that only lasts a few hours each, but that they made it such that their struggles are believable is quite the achievement. Joey in the beginning of Episode 4 is a good example of that fake-but-real kind of internal struggle. He likes Sean and even states on a few occasions that he doesn't believe Sean actually killed that cop (more correctly, I think he says that he thinks Sean doesn't have it in him to do whatever it is the police are saying he did). But he also has a job to do, and laws to follow. He also has a life of his own. Joey can't just let Sean go free, that would effectively cause him to be the one who gets locked up. But he wants to help Sean because in his gut he feels it's the right thing to do. Naturally I snuck out without whacking Joey, because I felt like Joey doesn't deserve any consequences of the ensuing jailbreak. Same for the letter to Karen before leaving Away; I addressed it to "Mom" because I felt like her character was trying to be a mom the only way she knew how. She was not a mother in any traditional sense of the word; there's no forgiving the fact that she left the family; she is a person who realized that she would have made the world worse for her young ones if she stayed behind. Was that the right decision?
I certainly can't say if it was or wasn't, but I do feel like her character was written to be one who thought she was doing the right thing by leaving, as the alternative was worse. And she implied as much at the motel and in Away. That's why I addressed her as "Mom" because after seeing what she was willing to do, I recognize that she's doing what she thinks a mother should do. It's really weird that she thinks that's what motherhood is, but that's just how she's been wired (and written). And speaking of motherhood, man alive I was so happy to finally see David in Away. I chose to sacrifice Arcadia Bay in LIS1 so of course the David I met in Away lost Joyce, but gained Chloe. When I first started walking idly towards the silver trailer, I spotted that oh-so-recognizable painting leaning against it. I yelped aloud and willed Sean to run faster in the game, only to find that David wasn't around (yet) (dammit). I really liked that they wrote in David's character as having made amends with Chloe. Unfortunately, it took the deaths of hundreds of people to do so, but it was nice to see that he's understood that life isn't all about following the rules. The picture of our favourite Partners in Crime and Time in his trailer was a really nice nod to LIS1, and the phone conversation between David and Chloe was an especially nice touch. And when Sean was getting ready to leave, it was nice to see some of that tough love from David again. He's a hardass, but he cares. Only now he finally knows how to show that he cares. (I did some Googling to find out what David would be like if I chose to sacrifice Chloe. Turns out he gets divorced from Joyce. What the hell? He basically loses everything? Man, tough luck.) I actually found myself spending more time roaming around the environment in LIS2 than I did with LIS1. Maybe Dontnod learned their lesson from LIS1 that people love to roam around and look under every rock. I basically did just that in LIS2, and really enjoyed it.
The environments were aurally and aesthetically richer than in LIS1, but subtly so. I also felt that the side conversations with characters were a bit less contrived. However, I think that's due to the nature of not being in and around high school characters this time around. It must be difficult to write "teenager" dialogue. One thing that I need to compare between LIS1 and LIS2 is the way they handled the "moments". Max would sit and just take in her surroundings whereas Sean would draw. I totally see how these are actually personality traits (Max is an introvert through and through, and a shutterbug, her eyes are the camera; Sean is an artist and an extroverted introvert, pen and paper are his lens), and appreciate them both. I just loved those moments in LIS1 when Max could just sit down and enjoy the scene. The music, the cuts to different angles, all of it. There weren't as many of these kinds of moments in LIS2, even though they had many opportunities with the amazing vistas. Take the canyon landscape at the beginning of Episode 5: a perfect opportunity to just sit and take in the sights and sounds, maybe with Daniel hurling little pebbles over the edge, too. Now that I realize I've written way more than anybody cares to ever read, I'll wrap up with the use of music. Not just that DN brought back Jonathan Morali, but that they brought back tracks from previous games to bring the player back to certain important moments. For example, when we see Chris again, we hear Sufjan Stevens' track lightly playing in the background, reminding players of our time as Captain Spirit one Saturday morning. Another is when we're talking to David about his past and the "choices" he had to make, and we hear a track from LIS1 behind his dialogue. They are really great throwbacks to previous titles, and the emotions they brought out in us.
And emotional I was, when David was talking about how he and Chloe had learned to get along in their own way, with the Max and Chloe theme accompanying his storytelling. OK, one last thing. And this will likely be an unpopular opinion for many, though shared by some. I hope that in the upcoming Tell Me Why title they'll have either Sean or Daniel be some sort of character in the game. Someone kind of like David was in LIS2. He's a surrogate father figure for a moment, just to remind them that it's going to be hard, but worth it, to do what you have to do. The symbolism of Sean/Daniel showing up is that choices matter, so choose wisely. And for what it's worth, I got the Lyla Redemption ending. Alright, that's enough brain dumping from me. Go enjoy the rest of your day :)
As 2020 continues to unravel, with people worried about their health, livelihoods, jobs and general way of life, at least one thing can remain constant - Groestlcoin's release schedule. We at the core Groestlcoin team really hope everyone is doing well and coping with what 2020 is throwing at us all. For anything to change in this world, major and seemingly dramatic change and chaos unfortunately need to ensue first, but rest assured everyone will come out of 2020 much stronger! The Groestlcoin team has been working on a vast amount of new technology during these uncertain times, which we would like to share with you today. Groestlcoin Core 19.1 The full list of changes in Groestlcoin Core 19.1 is too long to list here, so we won't bore those who do not want to see every slight change. For that, please go to https://github.com/Groestlcoin/groestlcoin/blob/2.19.1/doc/release-notes/release-notes-2.19.1.md. Instead, we will give a general list of changes here. We recommend upgrading to this version if you are running a full node yourself.
New User Documentation
New and updated RPCs
New Settings Implemented and other settings updated
RPC and configuration options removed or deprecated
Various low-level changes
How to Upgrade?
Windows: If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), then run the installer.
macOS: If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), run the dmg and drag Groestlcoin Core to Applications. Users running macOS Catalina need to "right-click" and then choose "Open" to open the Groestlcoin Core .dmg.
https://github.com/Groestlcoin/groestlcoin Zeus GRS iOS Wallet Release Zeus GRS: A mobile Groestlcoin app for Lightning Network Daemon (LND) node operators. To use Zeus, you must have a running Lightning Network Daemon (LND), and you must provide Zeus GRS with your node's hostname, port number, and the LND macaroon you choose to use (in hex format). If you're running a Unix-based operating system (e.g. macOS, Linux), you can run xxd -ps -u -c 1000 /path/to/admin.macaroon to generate your macaroon in hex format.
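If you'd rather script the conversion, here is a minimal Python equivalent of the xxd command above (the file path is just an example):

```python
from binascii import hexlify

def macaroon_to_hex(path: str) -> str:
    """Read an LND macaroon file and return its uppercase hex encoding,
    equivalent to `xxd -ps -u -c 1000 /path/to/admin.macaroon`."""
    with open(path, "rb") as f:
        return hexlify(f.read()).decode().upper()
```

Paste the resulting string into the macaroon field in Zeus GRS.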
• Scan LNDconnect functionality • Dark and light theme • Option to lock app with a pin • Open Source • Connect to your node - Zeus GRS lets users connect to their existing Lightning node, allowing them to send, receive and manage their channels. • Multiple Wallets - Zeus GRS allows users to create and control as many wallets as they'd like.
https://github.com/Groestlcoin/zeus HODL GRS iOS Wallet Release HODL GRS connects directly to the Groestlcoin network using SPV mode, and doesn't rely on servers that can be hacked or disabled. HODL GRS utilises AES hardware encryption, app sandboxing, and the latest security features to protect users from malware, browser security holes, and even physical theft. Private keys are stored only in the secure enclave of the user's phone, inaccessible to anyone other than the user. Simplicity and ease of use are HODL GRS's core design principles. A simple recovery phrase (which we call a Backup Recovery Key) is all that is needed to restore the user's wallet if they ever lose or replace their device. HODL GRS is deterministic, which means the user's balance and transaction history can be recovered just from the backup recovery key.
• Simplified payment verification for fast mobile performance • No server to get hacked or go down • Single backup phrase that works forever • Private keys never leave your device • Import password protected paper wallets • Payment protocol payee identity certification • Apple Watch support This application is licensed under MIT. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to this app, no compensation will be given. Use this app solely at your own risk.
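The "single backup phrase that works forever" model works because the wallet is deterministic: everything is re-derived from the recovery phrase. A minimal sketch of the standard BIP39 phrase-to-seed step (assuming HODL GRS follows BIP39, which is typical for wallets of this kind):

```python
import hashlib
import unicodedata

def mnemonic_to_seed(phrase: str, passphrase: str = "") -> bytes:
    """BIP39: stretch a recovery phrase into a 64-byte wallet seed using
    PBKDF2-HMAC-SHA512 with 2048 rounds and salt "mnemonic" + passphrase."""
    phrase = unicodedata.normalize("NFKD", phrase)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", phrase.encode(), salt.encode(), 2048)
```

The same phrase always yields the same seed, which is why the phrase alone can restore balances and transaction history on a new device.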
• Multi-currency - Supports more than 20 fiat currencies.
• Multi-language - Supports more than 20 languages.
• Export TXHEX - Get your transaction hex (TXHEX) without broadcasting it, and broadcast it only through the relay of your choice.
• Be in control - On your Groestlcoin wallet, your private keys never leave your device.
• Multiple wallet support - This wallet aims to support the highest wallet standards. It currently supports HD, HD SegWit, HD BECH32 Native SegWit, Legacy single-address and SegWit single-address wallets, and the app allows you to have as many wallets as you need in a single instance.
• HD wallets - The Hierarchical Deterministic (HD) key creation and transfer protocol (BIP32) allows creating child keys from parent keys in a hierarchy. HD wallets generate a different public key for each transaction.
• SegWit - SegWit enabled by default.
• Full encryption - On top of the phone's multi-layer encryption, GRS Bluewallet can encrypt everything with an added password. Because biometric security (Touch ID, Face ID) alone is not safe, you can set an additional password to encrypt your wallet instead.
• Plausible deniability - A custom feature designed with your personal security in mind. GRS Bluewallet allows you to define a different password which decrypts a fake wallet set-up, for any situation in which you are forced to disclose your access but don't want to (or can't) show your real wallet.
• Open Source - Under the MIT License.
• Watch-only wallets - Watch-only wallets allow you to keep an eye on your cold storage without touching your private key. Easily import your address or xpub and watch it from your app without ever touching it.
• Lightning Wallets - Wallets with support for the Lightning Network protocol. Unfairly cheap and fast transactions. You can send, receive and refill your wallets.
• Bump and Cancel transactions - Ability to bump and cancel sent transactions with Replace-by-Fee (RBF), and to bump received transactions with Child-Pays-for-Parent (CPFP), on Native SegWit wallets (bech32/BIP84).
• Plug in your Groestlcoin full node (new) - Ability to plug in your own Groestlcoin full node through Electrum Personal Server (EPS), ElectrumX or Electrs. Don't trust; verify, for maximum sovereignty.
This application is under the MIT license. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to this app, no compensation will be given. Use this app solely at your own risk.
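To illustrate the BIP32 idea behind HD wallets, here is a simplified sketch of the hardened-child derivation step using only the Python standard library. It is not a complete implementation: real wallets also add the left half to the parent key modulo the curve order, and support non-hardened (public-key) derivation.

```python
import hmac
import hashlib

def derive_hardened_child(parent_key: bytes, chain_code: bytes, index: int):
    """BIP32 hardened derivation sketch:
    I = HMAC-SHA512(chain_code, 0x00 || parent_key || ser32(index)).
    The left 32 bytes feed into the child private key and the right
    32 bytes become the child chain code. (Omits the mod-n addition
    to the parent key that a full implementation performs.)"""
    assert index >= 0x80000000, "hardened indices start at 2**31"
    data = b"\x00" + parent_key + index.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]
```

Because each child is derived from its parent plus an index, a whole tree of keys (and thus a fresh address per transaction) can be regenerated from one master seed.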
https://github.com/Groestlcoin/bluewallet GRS Lightning Wallet Released GRS Lightning: An easy-to-use cross-platform Groestlcoin Lightning wallet GRS lightning leverages Neutrino to give users a lightweight option to control their own funds, as opposed to running a full node or trusting a third party to play custodian. Features • A User Experience for Everyone • Fully Non-Custodial with LND • Powered by Neutrino and Autopilot • Open Source
This is still early technology and there's a risk of losing all of your funds. We recommend not putting in more money than you are willing to lose. Using the same mnemonic seed between installations or devices is not recommended. Keep the app open until it is fully synced; this will take a while.
• Provides a fully functional wallet interface, allowing you to send and receive funds across the Lightning Network with ease. • The user interface is responsive and will adapt to fit any web enabled desktop, tablet or mobile device. • You can search the Lightning Network graph, manage peer connections and open & close channels with ease. • The plugin has QR support, enabling basic encoding & decoding of QR codes. • GRS LND For WP also adds a number of WordPress 'short codes', allowing you to embed LND functionality directly in your website pages and posts.
GRS LND For WP can be installed directly from WordPress. Simply navigate to the 'Plugins -> Add New' page and search for 'GRS LND For WP'. You can also view GRS LND For WP on the WordPress.org Plugin Directory. To install the plugin manually using source code from this repository:
1. Download the latest plugin release from this repository.
2. Browse to the 'Plugins -> Add New' page of your WordPress admin panel.
3. Click the 'Upload Plugin' button, select 'Browse' and choose the release .zip that you downloaded in step 1. Press 'Install Now'.
4. On the next screen, press the 'Activate' button to turn on the plugin.
You're done. You should now see the 'GRS LND For WP' link on your WP admin navigation menu.
https://github.com/Groestlcoin/grs-lnd-for-wp GRS Unstoppable Wallet - Android MainNet and TestNet Unstoppable GRS is an open-source, non-custodial, fully decentralised wallet. The engineering process behind this wallet is radically driven by libertarian principles: exclusive control over what is yours.
• Control your crypto - Unstoppable GRS is a non-custodial wallet. The private keys never leave your phone. • Keep your crypto safe - When you enable the lock code on your phone's operating system, no one will be able to access your wallet funds even if your phone is stolen or lost. In case of a device loss, Unstoppable GRS makes it easy to restore your wallet on another device. • Be independently unstoppable - Unstoppable GRS was engineered to remain online and fully-functional indefinitely. Transfer Groestlcoins regardless of local government regulations. No entity can stop you from sending or receiving crypto or force Unstoppable GRS to stop working. Finally, you have a secure crypto wallet to spend Groestlcoin, and send & receive crypto. • Stay private - With Unstoppable GRS you are connecting directly to decentralised blockchains without any restrictions or intermediaries. Only you can see your assets. There are no accounts, emails, phone numbers, identity checks, or third-party servers storing any private data. This application is under MIT license. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to this app, no compensation will be given. Use this app solely at your own risk.
https://github.com/Groestlcoin/unstoppable-wallet-android Groestlcoin Esplora Block Explorer Released (Mainnet and Testnet!) Groestlcoin Esplora is an open-source Groestlcoin blockchain explorer. This JSON over RESTful API provides you with a convenient, powerful and simple way to read data from the Groestlcoin network and build your own services with it.
• Explore blocks, transactions and addresses • Support for Segwit and Bech32 addresses • Shows previous output and spending transaction details • Quick-search for txid, address, block hash or height by navigating to /<query> • Advanced view with script hex/assembly, witness data, outpoints and more • Mobile-ready responsive design • Translated to 17 languages • Light and dark themes • Noscript support • Transaction broadcast support • QR scanner • API support Groestlcoin Esplora is licensed under MIT. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to Esplora, no compensation will be given. Use Groestlcoin Esplora solely at your own risk.
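As a sketch of how a service might consume this API: the base URL below is an assumption (substitute your own Esplora instance), and the endpoint shapes (/tx/<txid>, /address/<addr>) follow the upstream Esplora HTTP API that this explorer is built on.

```python
import json
from urllib.request import urlopen

# Assumed instance URL for illustration; point this at your own deployment.
ESPLORA_BASE = "https://esplora.groestlcoin.org/api"

def esplora_url(resource: str, value: str) -> str:
    """Build an Esplora REST endpoint, e.g. /tx/<txid> or /address/<addr>."""
    return f"{ESPLORA_BASE}/{resource}/{value}"

def get_json(url: str):
    """Fetch and decode a JSON resource from the explorer (needs network access)."""
    with urlopen(url) as resp:
        return json.loads(resp.read())
```

For example, get_json(esplora_url("address", some_address)) would return the JSON summary for that address, which you can feed into your own services.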
• Custom HD key derivation added • Added Esplora support A live version is available at https://www.groestlcoin.org/webwallet, but it is recommended to download the webwallet and run it offline on your own PC. Open index.html to get started. The built-in wallet can be used with any (non-)existing mail address and any password. This application is licensed under MIT. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to this app, no compensation will be given. Use this app solely at your own risk.
Remember the mail address and password you used otherwise you will lose your funds.
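The warning above suggests the wallet is derived deterministically from the mail address and password. Purely as an illustration (the webwallet's actual derivation scheme is an assumption here, not taken from its source code), a wallet of this shape typically runs both inputs through a key-derivation function:

```python
import hashlib

def derive_wallet_seed(email: str, password: str, rounds: int = 100_000) -> bytes:
    """Illustrative only: derive a 32-byte wallet seed from a mail address
    and password with PBKDF2-HMAC-SHA256, using the email as the salt.
    Losing either input means the seed (and the funds) cannot be recovered."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), email.encode(), rounds)
```

Because the derivation is deterministic, the same email and password always reproduce the same wallet, and there is no server-side reset if you forget them.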
https://github.com/groestlcoin/webwallet Groestlcoin LND Updated to v0.10 The Lightning Network Daemon (LND) is a complete implementation of a Lightning Network node. Lnd has several pluggable back-end chain services including grsd (a full-node), groestlcoind, and neutrino (a new experimental light client). The project's codebase uses the grssuite set of Groestlcoin libraries, and also exports a large set of isolated re-usable Lightning Network related libraries within it.
• Macaroon Bakery
• Multi-Path Payments
• Weakness Addressed by MPP
• Single-Shot MPP Payments by Default
• Custom Onion-Tunneled TLV Payment Metadata Records
• New Payment Type: keysend
• First-Class Rebalancing via Circular Payments to Self
• Local balance check
• Privacy Enhancement
• Validate Sorted Uncompressed Short Channel IDs
• Add payment_secret to BOLT 11 Payment Requests
• Cross-Implementation Protocol Compatibility Fixes
• Decoupled Min HTLC Settings
• Option Upfront Shutdown Support
• Sweep Small Outputs
• Autopilot External Score Trigger
• Channel Fitness Tracking
• Pathfinding Improvements
• Deeper Feature Bit Inspection
• Updates to Default gRPC Settings
• Uniform lncli Hex-Encoding
• Updates to QueryRoutes
• New RPC Calls
• Default unsafe-disconnect Setting and Deprecation
• Peer to Peer Gossip
• Invoice Handling
• Channel State Machine
• On-Chain Contract Handling
• Architectural Changes
• Multi-Path Payments Sending Support
• Payment tracking
• Lifted Invoice Limit
• PSBT Funding
• Anchor commitment format
• Watchtowers tor support
https://github.com/Groestlcoin/lnd/ Groestlcoin Eclair Updated to v0.3.3.0 Groestlcoin Eclair (French for Lightning) is a Scala implementation of the Lightning Network. It can run with or without a GUI, and a JSON API is also available. Groestlcoin Eclair requires Groestlcoin Core 2.17.1 or higher. If you are upgrading an existing wallet, you need to create a new address and send all your funds to that address. Groestlcoin Eclair needs a synchronised, segwit-ready, zeromq-enabled, wallet-enabled, non-pruning, tx-indexing Groestlcoin Core node. Groestlcoin Eclair will use any GRS it finds in the Groestlcoin Core wallet to fund any channels you choose to open, and will return GRS from closed channels to this wallet. You can configure your Groestlcoin node to use either p2sh-segwit addresses or BECH32 addresses; Groestlcoin Eclair is compatible with both modes.
• Multipart payments • Trampoline Routing Preview This application is licensed under Apache. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to this app, no compensation will be given. Use this app solely at your own risk. Groestlcoin Eclair is developed in Scala, a powerful functional language that runs on the JVM, and is packaged as a JAR (Java Archive) file. We provide two different packages, which internally use the same core libraries:
eclair-node (headless application that you can run on servers and desktops, and control from the command line)
https://github.com/Groestlcoin/eclai Groestlcoin C-Lightning Updated to v0.8.2 C-lightning: A specification-compliant Lightning Network implementation in C. C-lightning is a lightweight, highly customisable and standards-compliant implementation of the Lightning Network protocol.
• We now support gifting mgro to the peer when opening a channel, via push_msat, providing a brand new way to lose money!
Invoice routehints can be overridden using exposeprivatechannels:
• Wallet withdraw transactions now set nLocktime, making them blend in more with other wallets.
• Preliminary support for plugin hooks which can replace the default groestlcoin-cli with other blockchain querying methods (the API may change in future releases, though!).
• listforwards now records the outgoing short_channel_id, even if it wasn't possible to start forwarding.
• Plugins can set additional feature bits, for more experimentation.
• More than one plugin can register for the htlc_accepted hook; others will become multi-user in future releases.
• Prevent a case where grossly unbalanced channels could become unusable.
• New config option --large-channels (also known as 'wumbo') which enables opening channels of any size. (Note that your peer must also support large channels.)
• This release includes a keysend plugin, which enables receiving 'keysend' payments, as first introduced by Lightning Labs. Note that the included keysend plugin is receive-only for this release. Nodes which do not want the hassle of spontaneous unrequested payments should add 'disable-plugin=keysend' to their config!
• We'll now announce multiple connection endpoints for a single 'type', e.g. multiple IPv4 addresses.
• Big performance improvement in the pay command (~1s speedup on average).
• c-lightning nodes can now participate in creating larger channels (with the --large-channels config option).
• We now wait until the first payment through a channel before updating the feerate; this should help with some spurious closures at channel open or re-connect that were occurring against older versions of other implementations.
• A new command getsharedsecret for deriving the BOLT-compliant shared secret for a node and a point.
• Facilities for building rendezvous-compatible onions have been added to the onion devtool.
• Plugin options will now respect the type they were given in the manifest.
• Fixes for plugin cleanups and hangs.
• Python 2 has been removed as a dependency.
https://github.com/Groestlcoin/lightning Groestlcoin SparkWallet Updated to v0.2.14 Groestlcoin Spark Lightning Wallet Android: A minimalistic wallet GUI for c-lightning in Android. Groestlcoin Spark is currently oriented for technically advanced users and is not an all-in-one package, but rather a "remote control" interface for a c-lightning node that has to be managed separately.
• Fix bug with missing channel reserve
• Fix channels view
• Detect if the "base directory" is provided and default to the Groestlcoin mainnet network subdirectory within it.
• Don't display unconfirmed onchain balance
• Fix: Some QR codes not read properly in the web QR scanner
• Fix: Resolve TLS issues with NodeJS 10
• Electron: Update to v8
• Fix bug in automatic credentials generation
• Fix Android crashes caused by plugin-local-notifications
• Cordova Android: Allow connecting to server in cleartext
This application is licensed under MIT. There is no warranty and no party shall be made liable to you for damages. If you lose coins due to this app, no compensation will be given. Use this app solely at your own risk.
How does a one touch binary option work? Below is a quick four-step one touch options example of how you can increase your returns. Let's say you've been following EURUSD, and the current price for the currency pair is 1.1300. With your broker of choice, you choose to trade one touch options and the timeframe. In this section, we will look at one-touch options, how they work, and whether they are an instrument worth considering. As stated, the most popular binary options are call and put trades, which allow the trader to profit from a correct prediction on whether the price of an underlying asset will go up or down. Technical analysis works well to show where the price has been, as well as the potential for future movement. Combine these two and you can generate some nice profits from Touch and No Touch style binary options trades. One Touch Method. For example, with Touch, you must seek out an asset that is highly likely to be on the move. One Touch Binary Option. There is a minimum withdrawal threshold, but it is only $10 if you are using a credit or debit card. If you need a refresher on the basics of binary options, check out our 101 guide. One-Touch Binary Options. A 'One Touch' binary option is an option that is hugely popular among binary option traders. Double One Touch Binary Options. Double one touch, as the name suggests, is a binary option trading type in which the trader sets two touch points. If the value of the underlying asset hits either of the determined points, the investor receives a predetermined pay-off. This type of strategy is similar to one touch binary options, except with two triggers.
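To make the payoff mechanics concrete, here is a minimal sketch of a one-touch payoff for a barrier above the current price. The stake and payout figures in the usage example are illustrative, not a broker quote:

```python
def one_touch_payout(price_path, barrier, payout, stake):
    """Profit/loss of a one-touch binary option (barrier above spot):
    the fixed payout is received if the price touches the barrier at
    any point before expiry; otherwise the whole stake is lost."""
    touched = any(p >= barrier for p in price_path)
    return payout - stake if touched else -stake
```

With spot at 1.1300 and a barrier at 1.1350, a path that reaches 1.1360 pays out, while a path that never rises above 1.1310 loses the stake: a single touch is enough, regardless of where the price ends up at expiry.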
Note: follow the rules, do not be arrogant, stay disciplined and focused, and believe in your own ability. Then you can become a successful binary options and forex trader. Gokil Om Jindul: http://bit.ly/1b2e0Ls -- In this video, One Touch options are explained, including what makes them unique from traditional binary options. Are binary options a good idea? If you're thinking about trading binary options, watch this video first. Let's go through the truth about binary options. Binary options trading is a style of trading based around a single question: will a stock, commodity, or forex pair end up above or below a certain price point within a certain period of time?