#fact of the day: c++ had no standard implementation for getting all files in a directory until 17 lmaoooo
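For reference, the C++17 facility the post is alluding to is std::filesystem; a minimal sketch of listing a directory with it (before C++17 you would reach for dirent.h, the Win32 API, or Boost):

```cpp
// Since C++17, <filesystem> ships with the standard library, so listing a
// directory no longer needs platform-specific calls or third-party code.
#include <filesystem>
#include <iostream>

int main() {
    for (const auto &entry : std::filesystem::directory_iterator(".")) {
        std::cout << entry.path() << "\n";
    }
    return 0;
}
```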
neverheardnothing · 4 years
Text
hello. over my tumblr break a few months ago i got bored enough to count the frequencies of every word in joe iconis songs and i don’t know what else to do with this info so here y’all go. every word in joe iconis songs with 10+ occurrences in descending order but i also factored out the 250 most common words in the english language badly (warning, this is a 484 item long list).
1 ('i', 2375) 2 ("i'm", 708) 3 ("it's", 366) 4 ('oh', 312) 5 ('gonna', 204) 6 ('yeah', 175) 7 ('got', 163) 8 ("you're", 160) 9 ('am', 160) 10 ('na', 149) 11 ('wanna', 130) 12 ('hey', 129) 13 ('love', 127) 14 ('feel', 122) 15 ("i'll", 117) 16 ('things', 111) 17 ('really', 97) 18 ("i'd", 97) 19 ('bang', 93) 20 ('girl', 87) 21 ('ever', 86) 22 ("can't", 84) 23 ('better', 78) 24 ("there's", 78) 25 ("she's", 78) 26 ("i've", 75) 27 ('whoa', 74) 28 ('baby', 73) 29 ("that's", 70) 30 ('gotta', 69) 31 ("we're", 68) 32 ("ain't", 63) 33 ("won't", 63) 34 ('cool', 61) 35 ('song', 58) 36 ('away', 57) 37 ('anymore', 56) 38 ('because', 55) 39 ('ya', 55) 40 ('always', 54) 41 ('remember', 54) 42 ('gone', 51) 43 ('guess', 51) 44 ('person', 51) 45 ('woman', 49) 46 ('sing', 49) 47 ('blood', 49) 48 ('stay', 48) 49 ('care', 48) 50 ('into', 47) 51 ('mind', 46) 52 ('whatever', 46) 53 ('jeremy', 45) 54 ("didn't", 45) 55 ('wish', 45) 56 ('feeling', 43) 57 ('ooh', 43) 58 ('cuz', 43) 59 ("you'll", 43) 60 ('ribbit', 43) 61 ('everything', 42) 62 ("you've", 42) 63 ('done', 41) 64 ('nothing', 41) 65 ('fine', 41) 66 ('girls', 41) 67 ('something', 40) 68 ('please', 40) 69 ('hate', 40) 70 ('walk', 40) 71 ('nanana', 40) 72 ('going', 39) 73 ('maybe', 39) 74 ('hear', 39) 75 ('used', 39) 76 ('fall', 39) 77 ('wrong', 38) 78 ('fire', 37) 79 ('rather', 37) 80 ('around', 36) 81 ('shit', 36) 82 ('ba', 36) 83 ('heart', 36) 84 ('leave', 36) 85 ('ah', 36) 86 ('being', 36) 87 ('oh-h', 36) 88 ("he's", 35) 89 ('till', 35) 90 ('fight', 35) 91 ('face', 35) 92 ('la-la-la', 35) 93 ('da', 35) 94 ("let's", 34) 95 ('sure', 34) 96 ('hope', 34) 97 ('nana', 34) 98 ('guy', 33) 99 ('knew', 33) 100 ('wanted', 33) 101 ('shoot', 33) 102 ('em', 32) 103 ('rich', 32) 104 ('together', 32) 105 ('looking', 32) 106 ('myself', 32) 107 ('knock', 32) 108 ('chop', 32) 109 ('makes', 31) 110 ('since', 31) 111 ('whiskey', 31) 112 ('uh', 30) 113 ("they're", 30) 114 ('gimme', 30) 115 ('rick', 30) 116 ('somebody', 29) 117 ('those', 29) 118 ("doesn't", 29) 119 ('red', 29) 120 ('totally', 29) 121 ('hell', 29) 122 ("what's", 29) 123 ('hall', 29) 124 ('die', 29) 125 ('annie', 29) 126 ('bad', 28) 127 ('stop', 28) 128 ("you'd", 28) 129 ('listen', 28) 130 ('mamma', 28) 131 ('la-la-la-la', 28) 132 ('do-do', 28) 133 ('tonight', 27) 134 ('okay', 27) 135 ('hair', 27) 136 ("c'mon", 27) 137 ('roll', 27) 138 ('without', 27) 139 ('michael', 27) 140 ('christine', 27) 141 ('bounty', 27) 142 ('almost', 26) 143 ('another', 26) 144 ('kinda', 26) 145 ('mine', 26) 146 ('rock', 26) 147 ('records', 26) 148 ('music', 26) 149 ('lonely', 25) 150 ('words', 25) 151 ('heard', 25) 152 ('yo', 25) 153 ('madeline', 25) 154 ('says', 25) 155 ('band', 25) 156 ('lots', 25) 157 ('alive', 25) 158 ('god', 24) 159 ('times', 24) 160 ('battle', 24) 161 ('skin', 24) 162 ('dada', 24) 163 ('revolution', 24) 164 ('institution', 24) 165 ('broke', 23) 166 ('talk', 23) 167 ('eyes', 23) 168 ("who's", 23) 169 ('hold', 23) 170 ('burned', 23) 171 ('morning', 23) 172 ('chill', 23) 173 ('pretty', 23) 174 ('car', 23) 175 ('young', 23) 176 ('la-la', 23) 177 ('tired', 23) 178 ('nation', 23) 179 ('friend', 22) 180 ('everybody', 22) 181 ('rehearsal', 22) 182 ('true', 22) 183 ('inside', 22) 184 ('squip', 22) 185 ('ready', 21) 186 ('best', 21) 187 ('understand', 21) 188 ('else', 21) 189 ('lot', 21) 190 ('party', 21) 191 ('ignore', 21) 192 ('bit', 21) 193 ('cut', 21) 194 ('gets', 20) 195 ('sometimes', 20) 196 ("isn't", 20) 197 ('whole', 20) 198 ("everybody's", 20) 199 ('starts', 20) 200 ('feels', 20) 201 ('everyone', 20) 202 ('room', 
20) 203 ('dry', 20) 204 ('nice', 20) 205 ('juvie', 20) 206 ('sleep', 20) 207 ('wonder', 20) 208 ('size', 20) 209 ('ass', 20) 210 ('welcome', 20) 211 ('seen', 19) 212 ('weird', 19) 213 ('soon', 19) 214 ('yourself', 19) 215 ('alone', 19) 216 ('flame', 19) 217 ('taking', 19) 218 ('friends', 19) 219 ('enough', 19) 220 ('born', 19) 221 ('lordy', 19) 222 ('hunter', 19) 223 ('relate', 19) 224 ('yai', 19) 225 ('today', 18) 226 ('loser', 18) 227 ('bitch', 18) 228 ('until', 18) 229 ('arm', 18) 230 ('comes', 18) 231 ('dead', 18) 232 ('told', 18) 233 ('ow', 18) 234 ('honey', 18) 235 ('years', 18) 236 ('whack', 18) 237 ("we'll", 18) 238 ('n', 18) 239 ('nerd', 18) 240 ('fell', 18) 241 ('dad', 17) 242 ('pants', 17) 243 ('huh', 17) 244 ('nobody', 17) 245 ('mad', 17) 246 ('getting', 17) 247 ("wasn't", 17) 248 ('scared', 17) 249 ('wait', 17) 250 ('body', 17) 251 ('quite', 17) 252 ('hands', 17) 253 ('ohh', 17) 254 ('hurt', 17) 255 ('deserve', 17) 256 ('ride', 17) 257 ('game', 17) 258 ('survive', 17) 259 ('upgrade', 17) 260 ('free', 17) 261 ('certain', 17) 262 ('wha-oh', 17) 263 ('cigarettes', 17) 264 ('writer', 17) 265 ('bands', 17) 266 ('hunters', 17) 267 ('hang', 16) 268 ("wouldn't", 16) 269 ('age', 16) 270 ('cry', 16) 271 ('sky', 16) 272 ('past', 16) 273 ('behind', 16) 274 ('someone', 16) 275 ('saying', 16) 276 ('black', 16) 277 ('job', 16) 278 ('la', 16) 279 ('fix', 16) 280 ('alright', 16) 281 ('shot', 16) 282 ('bar', 16) 283 ('deal', 16) 284 ('respect', 16) 285 ('dog', 16) 286 ('flicks', 16) 287 ('strong', 15) 288 ("haven't", 15) 289 ('glad', 15) 290 ('next', 15) 291 ('escape', 15) 292 ('fun', 15) 293 ('though', 15) 294 ('promise', 15) 295 ('hide', 15) 296 ('ammonia', 15) 297 ('likes', 15) 298 ('thinks', 15) 299 ('yes', 15) 300 ('nurse', 15) 301 ('looks', 15) 302 ('halloween', 15) 303 ('twice', 15) 304 ('bout', 15) 305 ('sight', 15) 306 ('along', 15) 307 ('forget', 15) 308 ('voices', 15) 309 ('bathroom', 15) 310 ('master', 15) 311 ("shiro's", 15) 312 ('badass', 15) 313 ('rosalie', 15) 314 ('setting', 15) 315 ('road', 14) 316 ('dude', 14) 317 ('brain', 14) 318 ('kid', 14) 319 ('drink', 14) 320 ('means', 14) 321 ('guys', 14) 322 ('bed', 14) 323 ('doing', 14) 324 ('breathe', 14) 325 ('happy', 14) 326 ('scream', 14) 327 ('mistakes', 14) 328 ('anything', 14) 329 ('goes', 14) 330 ('blue', 14) 331 ('pretend', 14) 332 ('social', 14) 333 ('shout', 14) 334 ('geek', 14) 335 ('buddy', 14) 336 ("everything's", 14) 337 ("helen's", 14) 338 ('golden', 14) 339 ('movin', 14) 340 ('st', 14) 341 ("anne's", 14) 342 ('finally', 13) 343 ('choose', 13) 344 ('smoke', 13) 345 ('voice', 13) 346 ('terrible', 13) 347 ('happened', 13) 348 ('pass', 13) 349 ('street', 13) 350 ("couldn't", 13) 351 ('wall', 13) 352 ('instead', 13) 353 ('clear', 13) 354 ('tight', 13) 355 ('damn', 13) 356 ('susannah', 13) 357 ('smile', 13) 358 ('waiting', 13) 359 ('ground', 13) 360 ('remind', 13) 361 ('coolness', 13) 362 ('sad', 13) 363 ("things'll", 13) 364 ('brother', 13) 365 ("it'll", 13) 366 ('somewhere', 13) 367 ('veins', 13) 368 ('cat', 12) 369 ('weather', 12) 370 ('children', 12) 371 ("weren't", 12) 372 ('fucking', 12) 373 ('sorry', 12) 374 ('clean', 12) 375 ('pour', 12) 376 ('different', 12) 377 ('singing', 12) 378 ('coming', 12) 379 ('thinking', 12) 380 ('trying', 12) 381 ('sick', 12) 382 ('bone', 12) 383 ('least', 12) 384 ('lisa', 12) 385 ('nothin', 12) 386 ('dear', 12) 387 ('white', 12) 388 ('hot', 12) 389 ('charlie', 12) 390 ('family', 12) 391 ('door', 12) 392 ('korean', 12) 393 ('dodo', 12) 394 ('c-c-c', 12) 395 ('yours', 12) 396 
('c-c-c-come', 12) 397 ('wants', 11) 398 ('bloody', 11) 399 ('called', 11) 400 ('forever', 11) 401 ('sweet', 11) 402 ('soul', 11) 403 ('swear', 11) 404 ('touch', 11) 405 ('easy', 11) 406 ('days', 11) 407 ('stage', 11) 408 ('across', 11) 409 ('woah', 11) 410 ('crazy', 11) 411 ('town', 11) 412 ('dress', 11) 413 ('top', 11) 414 ('loves', 11) 415 ('rage', 11) 416 ('phone', 11) 417 ('super', 11) 418 ('feet', 11) 419 ('mess', 11) 420 ('penny', 11) 421 ('stars', 11) 422 ('supposed', 11) 423 ('miss', 11) 424 ('college', 11) 425 ('hates', 11) 426 ('quit', 11) 427 ('history', 11) 428 ('cage', 11) 429 ('falling', 11) 430 ('mcfly', 11) 431 ("i'mma", 11) 432 ('played', 11) 433 ('touching', 11) 434 ('band-aids', 11) 435 ('fox', 11) 436 ('thank', 11) 437 ('pitiful', 11) 438 ('covered', 11) 439 ('open', 10) 440 ("they'll", 10) 441 ("we've", 10) 442 ('feelings', 10) 443 ('gun', 10) 444 ('living', 10) 445 ('wow', 10) 446 ('book', 10) 447 ('wonderful', 10) 448 ('blame', 10) 449 ('brooke', 10) 450 ('space', 10) 451 ('slow', 10) 452 ('longer', 10) 453 ('naked', 10) 454 ("he'd", 10) 455 ('star', 10) 456 ('shirt', 10) 457 ('looked', 10) 458 ('i’m', 10) 459 ('standing', 10) 460 ('break', 10) 461 ('lame', 10) 462 ('ten', 10) 463 ('york', 10) 464 ('met', 10) 465 ('dreadfuls', 10) 466 ('mountain', 10) 467 ('push', 10) 468 ('two-player', 10) 469 ('war', 10) 470 ('talkin', 10) 471 ('throw', 10) 472 ('normal', 10) 473 ('hat', 10) 474 ('christmas', 10) 475 ('silver', 10) 476 ('freak', 10) 477 ('mom', 10) 478 ('garage', 10) 479 ('become', 10) 480 ('flesh', 10) 481 ('bastard', 10) 482 ('broadway', 10) 483 ('amphibian', 10) 484 ('outlaw', 10)
Text
All You Need to Know about Adobe Target Architect Master Certification
One of the central dreams of programming languages back in the 1950s was to create a means to write assembly language concepts in an abstract, high-level manner. This would allow the same code to be used across the wildly different machine architectures of that era and subsequent decades, requiring only a translator (assembler or compiler) that could transform the source code into the machine instructions for the target architecture.
Other languages, like BASIC, would utilize a runtime that supplied an even more abstract view of the underlying hardware, though at the cost of a lot of performance. Although the era of 8-bit home computers is long behind us, the topic of cross-platform development remains highly relevant today, whether one talks about desktop, embedded, or server development. Or all of them at the same time.
The basic interpretation of portable code is code that is not bound or restricted to a particular hardware platform, or a subset of platforms. This means that there can be no code that addresses specific hardware addresses or registers, or which assumes particular behavior of the hardware. If it is unavoidable that such parameters are used, they should be provided as external configuration data, per target platform.
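The listing the next paragraph refers to ("these two functions") appears to have been lost in reposting. A minimal sketch of what such a hal.h might declare; the two function names come from the text below, while the exact signatures are assumptions for illustration:

```cpp
// hal.h -- sketch of the kind of per-platform header described below.
// wait_msec and addr_write are the two functions named in the text;
// their signatures here are assumed for the sake of the example.
#ifndef HAL_H
#define HAL_H

#include <cstdint>

// Block for the given number of milliseconds (hardware timer, RTOS delay,
// or a plain sleep on a desktop build).
void wait_msec(std::uint32_t msec);

// Write a 32-bit value to a peripheral register address from the target's
// memory map (or to a simulated map when built for the host).
void addr_write(std::uintptr_t addr, std::uint32_t value);

#endif // HAL_H
```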
Here, the hal.h header file can be implemented for every target platform, providing the specialized instructions for that particular hardware. The file for each platform will contain these two functions. Depending on the platform, this file may use a specific hardware timer for the wait_msec function, and the addr_write function would use a memory map containing the peripheral registers and everything else that should be accessible to the software.
When compiling the application for a specific target, one could require that a target parameter be supplied to the Makefile, for example:
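The example itself did not survive the repost; a hypothetical fragment in that spirit, where a TARGET variable picks the platform-specific HAL source (the target names and file layout are assumptions):

```make
# Invoked as e.g. `make TARGET=stm32` or `make TARGET=host`.
TARGET ?= host

# Pull in the HAL implementation for the selected platform.
SRCS     := main.cpp hal_$(TARGET).cpp
CXXFLAGS += -DTARGET_$(TARGET)
```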
This way, the only thing needed to compile for a specific target is to write one specialized file for said target, and to supply the name of that target to the Makefile.
The previous section is mostly useful for bare-metal programming, or similar situations where one doesn't have a fixed API to work with. However, software libraries defining a fixed API that abstracts away the underlying hardware implementation details are now fairly common. Referred to as a hardware abstraction layer (HAL), this is a standard feature of operating systems, whether a small RTOS or a full-blown desktop or distributed server OS.
At its core, a HAL is essentially the same thing as we described in the previous section. The main difference is that it provides an API that can be used by other software, instead of the HAL being the sole application. Extending the basic HAL, we can add features such as the dynamic loading of hardware support, in the form of drivers. A driver is essentially its own little mini-HAL, which abstracts away the details of the hardware it provides support for, while implementing the necessary functions for it to be called by the higher-level HAL.
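As a rough illustration of this driver-as-mini-HAL idea, a higher-level HAL might only ever talk to drivers through a fixed function table. Every name in this sketch is an assumption made for illustration, not any particular HAL's API:

```cpp
// Each driver hides its hardware behind a small table of function pointers;
// the HAL core calls through the table and never touches registers directly.
#include <cstddef>
#include <cstdint>

struct hal_driver {
    const char *name;
    int (*init)();                                       // bring the device up
    int (*write)(std::uintptr_t addr, std::uint32_t v);  // device-specific write
};

constexpr std::size_t kMaxDrivers = 8;
static const hal_driver *g_drivers[kMaxDrivers];
static std::size_t g_driver_count = 0;

// Platform startup code (or a dynamic loader) plugs each driver in here.
int hal_register_driver(const hal_driver *drv) {
    if (g_driver_count >= kMaxDrivers) {
        return -1;  // no free slot
    }
    g_drivers[g_driver_count++] = drv;
    return 0;
}
```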
Generally speaking, using an existing or a custom HAL is highly advisable, even when targeting a single hardware platform. The point is that even when one writes code for just a single microcontroller, there is a great deal to be said for being able to take the code and run it on a HAL for one's desktop computer, in order to use common debugging and code analysis tools such as the Valgrind suite.
A crucial consideration alongside the question of "will my code run?" is "how efficient will it be?". This is especially relevant for code that involves a lot of input/output (I/O) operations, intense calculations, or long-running loops. On the face of it, there is no good reason why the same code should not run just as well on a 16 MHz AVR microcontroller as on a GHz-class quad-core Cortex-A system-on-chip.
Practically, though, the implications go far beyond mere clock speed. Sure, the ARM system will obliterate the raw performance numbers of the AVR MCU, but unless one puts in the effort to make use of the extra three cores, they will simply be sitting there, idle. Assuming the target platform has a compiler that supports the C++11 standard, one can use its built-in multithreading support from the <thread> header, with a basic program that would look something like this:
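(The original listing did not survive the repost; the following is a minimal sketch of the kind of program meant, spawning a few workers with <thread> and joining them.)

```cpp
// Spawn a handful of worker threads and wait for them all to finish.
#include <iostream>
#include <thread>
#include <vector>

void worker(int id) {
    std::cout << "hello from thread " << id << "\n";
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back(worker, i);
    }
    for (auto &t : threads) {
        t.join();  // wait for every worker before exiting
    }
    return 0;
}
```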
As mentioned, this code will work with any C++11-capable compiler that has the requisite STL functionality. Of course, the major catch here is that in order to have multithreading support like this, one must have a HAL and scheduler that implement such functionality. This is where the use of a real-time OS like FreeRTOS or ChibiOS can save you a lot of trouble, as they come with all of those features built in.
If you don't wish to burden an 8-bit MCU with an RTOS, then there is always the option to use the RTOS API or the bare-metal custom HAL mentioned before, depending on the target. It all depends on just how portable and scalable the code needs to be.
The aforementioned Cortex-A is an out-of-order architecture, which means that the code generated by the compiler gets reshuffled on the fly by the processor and can therefore be executed in any order. The faster and more advanced the processor on which one executes the code, the greater the number of potential issues. Depending on the target platform's support, one may have to use processor atomics to fence instructions. For C++11 these can be found in the <atomic> header in the STL.
This means adding instructions before and after the critical operations which tell the processor that all of those instructions belong together and must be executed as a single unit. This ensures that, for example, a 64-bit integer addition on a 32-bit processor will work just as well as on a 64-bit processor, even though the former needs to do it in multiple steps. Without the fencing instructions, another instruction could cause the value of the original 64-bit integer to be modified, corrupting the result.
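A small illustration of the point, assuming C++11's <atomic>: a 64-bit counter updated from several threads stays consistent even on targets where a 64-bit store is not a single native operation.

```cpp
// std::atomic makes each update indivisible, so torn reads and writes cannot
// corrupt the 64-bit value, even if the target needs multiple instructions.
#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<std::uint64_t> counter{0};

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([] {
            for (int n = 0; n < 100000; ++n) {
                counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto &t : workers) {
        t.join();
    }
    std::cout << counter.load() << "\n";  // always 400000
    return 0;
}
```

On targets without native 64-bit atomics the library may fall back to an internal lock, which can be checked at runtime with counter.is_lock_free().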
Although mutexes, spinlocks, rw-locks, and family were more common in the past for dealing with such critical operations, the move over the last decades has been toward lock-free designs, which tend to be more efficient as they work directly with the processor and have very little overhead, unlike mutexes, which require an additional state and set of operations to maintain.
One of the wonderful things about standards is that one can have so many of them. This is also what happened with operating systems (OSes) over the decades, with each of them developing its own HAL, kernel ABI for drivers, and application-facing API. For writing drivers, one can create a so-called glue layer that maps the driver's business logic to the kernel's ABI calls and vice versa. This is essentially what the NDISwrapper in Linux does when it allows WiFi chipset drivers that were originally written for the Windows NT kernel to be used under Linux.
For userland applications, the situation is similar. As every OS presents its own unique API to program against, one has to somehow wrap this in a glue layer to make it work across OSes. You could, of course, do this work yourself, creating a library that includes specific header files for the requested target OS when compiling the library or application, much as we saw at the beginning of this article.
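A hedged sketch of that do-it-yourself approach: selecting an OS-specific implementation at compile time behind one shared function. The sleep_ms wrapper is made up for illustration; the macros and system calls are the usual compiler- and OS-provided ones.

```cpp
// One portable API, two platform-specific implementations chosen by the
// preprocessor at compile time.
#if defined(_WIN32)
  #include <windows.h>
  void sleep_ms(unsigned int ms) { Sleep(ms); }
#else
  #include <unistd.h>
  void sleep_ms(unsigned int ms) { usleep(ms * 1000u); }
#endif
```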
It is, however, more convenient to use one of the myriad of existing libraries that provide such functionality. Here GTK+, WxWidgets, and Qt have been long-time favorites, with POCO being rather popular when no graphical user interface is needed.
Depending on the project, one could get away with running the exact same code on everything from an ESP or STM MCU all the way up to a high-end AMD system, with nothing more than a recompile required. This is a strong focus of some of my own projects, with NymphCast being the most notable in that regard. Here the client side uses a remote procedure call (RPC) library: NymphRPC.
Since NymphRPC is written to use the POCO libraries, the former can run on any platform that POCO supports, which is essentially any embedded, desktop, or server OS. Regrettably, POCO doesn't as of yet support FreeRTOS. However, FreeRTOS supports all the standard multithreading and networking APIs that NymphRPC needs to work, allowing for a FreeRTOS port to be written which maps those APIs using a FreeRTOS-specific glue layer in POCO.
After this, a simple recompile is all that is needed to make the NymphCast client software and NymphRPC run on any platform that is supported by FreeRTOS, allowing one to control a NymphCast server from anything from a PC to a smartphone to an ESP or similar network-enabled MCU, without changes to the core logic.
Even though the committee regrettably missed out on including networking support in the current C++ standard, we may still see it in a future revision. With such functionality directly in the language's standard library, having supporting compilers for the target platforms would mean that one could take the same code and compile it for FreeRTOS, Windows, Linux, BSD, MacOS, VxWorks, QNX, and the like, all without requiring an extra library.
Of course, the one big exception here is GUI programming. Networking, for example, has always stuck pretty close to Berkeley-style sockets, and multithreading is fairly consistent across implementations as well. But the graphical user interfaces across OSes, and even between individual window managers for X Server and Wayland on Linux and BSD, are very different and extremely complex.
Most cross-platform GUI libraries do not bother to even use those APIs for that reason, but instead simply create a basic window and render their own widgets on top of it, sometimes approximating the native UI's look and feel. Also common is to take something like HTML and compose the GUI with that, with or without adding one's own HTML rendering engine and JavaScript runtime.
Clearly, GUIs are still the final frontier when it comes to cross-platform development. For everything else, we have come a long way.
Photo
DRUG AND SEX OFFENDER TREATMENT OFFERED TO INMATES IN THE FEDERAL BUREAU OF PRISONS
By Sean R. Francis, MS
President
Justice Solutions of America, Inc. 
Currently, there are over 154,000 inmates in the custody of the Federal Bureau of Prisons (BOP). The vast majority of these inmates suffer from at least one mental health diagnosis. To fulfill the primary mission of the BOP, which is the protection of the public, BOP has implemented various mental health programs to assist inmates who struggle with mental health difficulties. This paper will discuss the various treatment options available to inmates who suffer from substance abuse issues and sexual offending issues. This paper will also address the various ways in which forensic psychologists play a vital role in the execution of these programs and treatment of the inmates.
I. Why We Need Drug Abuse Education in the Bureau of Prisons
In the early 1970s, President Richard Nixon declared a "War on Drugs." This declaration ushered in new law enforcement tools, such as mandatory minimum sentencing and "no-knock warrants," to combat the flood of illegal drugs entering the United States (Sirin, 2011). Many believed that this was a measure aimed at poverty-stricken drug addicts and offenders, many of whom were black. One of Nixon's top aides, John Ehrlichman, would admit years later that Nixon viewed black people as an enemy (Sirin, 2011). However, it would not be until the 1980s and the Reagan era that the "War on Drugs" really got ramped up. The United States would embrace an almost hysterical belief in the harms of illegal drugs, spearheaded by First Lady Nancy Reagan's "Just Say No" campaign. This resulted in draconian laws, the abolition of parole in the federal system, the passage of the federal sentencing guidelines, and a zero-tolerance policy with regard to drug abusers and suppliers (Sirin, 2011).
These laws largely and unjustly targeted the black community. The biggest example of this was the disparity between crack cocaine and powder cocaine (Sirin, 2011). Crack was treated as a substance that was vastly more dangerous and addictive than powder cocaine, and the law therefore treated crack as 100 times worse. The problem with this is that crack was cheaper to produce than pure powder cocaine; thus, it was popular in many poverty-stricken black communities, while powder cocaine was popular with the white community. Five grams of crack cocaine would result in a five-year mandatory minimum. Drug offenders were now serving more time than rapists and murderers.
The result of such actions was an explosion in the number of federal offenders in the Bureau of Prisons. In 1981 the federal prison population was 26,313 (BOP.gov). However, by the time President Reagan left office the population had grown to 57,762 (BOP.gov). The population more than doubled, largely as a result of the "War on Drugs."
The next major increase in the federal prison population due to drugs would come during the Clinton years. Clinton would embrace many of the policies of his Republican predecessors. He would also reject a proposal to end the disparity between crack and powder cocaine. Clinton would leave office with a federal prison population of 145,125 inmates (BOP.gov).
In response to the growing number of drug offenders, the Bureau of Prisons started a massive expansion of its substance abuse treatment programs during the 1980s. In 1988, then-BOP director Michael Quinlan created the first residential drug abuse treatment program (RDAP) (Pelissier et al., 2001). Congress also amended 18 USC § 3621 to allow the Bureau of Prisons to grant an offender up to 12 months off of their prison sentence for successful participation in the 500-hour residential drug program (Pelissier et al., 2001). Prior to the passage of the First Step Act in 2018, the 500-hour residential drug program was the only program that allowed offenders to get time off of their sentence. All federal offenders must serve 85 percent of their sentence.
II. Residential Drug Abuse Treatment Program
(A). The residential drug treatment program is an intensive 500-hour substance abuse program (BOP.gov). It has been established at specific federal institutions throughout the nation so that all security levels may participate. Currently there are 90 RDAP programs at 77 BOP institutions throughout the nation. Participation is voluntary, and successful completion may result in up to 18 months being deducted from an inmate's sentence (BOP.gov).
Once an inmate has 30 months or less remaining on their sentence, they may submit themselves for placement in RDAP (Ellis & Bussert, 2016). It is not certain that an offender will be accepted into the program, and if they are, it is not certain they will receive time off their sentence. The inmate must have a verifiable substance abuse issue, often documented in the inmate's pre-sentence report (Ellis & Bussert, 2016).
The inmate usually must be recommended to participate in RDAP by their sentencing judge (Ellis, Bussert, 2016). Also, only offenders with certain convictions will qualify for time off of their sentence. Violent offenders, sex offenders and those who have active detainers will not be eligible for the time off. 
Once an offender submits a request for placement in RDAP, the first step is to meet with a member of the psychological staff at the inmate's current institution (Ellis & Bussert, 2016). The psychologist will review the inmate's file, interview the inmate, and conduct an evaluation that will result in a recommendation on RDAP placement. Because all inmates want time off their sentence, the BOP psychological staff are instrumental in determining who is truly in need of these services and who is simply malingering in an attempt to go home sooner.
Once an inmate is approved for RDAP they are re-designated to an institution with the program and transferred. When the inmate reaches their new institution, they are housed in a unit that is solely dedicated to the RDAP program (Ellis, Bussert, 2016). Only program participants are housed in these units and, while a corrections officer does staff the unit for security, the unit is run by the forensic psychological staff of the RDAP. The psychological staff have offices in the housing units and control every aspect of the unit, creating a treatment milieu (Ellis, Bussert, 2016).
During business hours the inmates will participate in a half day of programming. There are two sessions, AM and PM, with lunchtime marking the end of the AM session and the beginning of the PM session. The treatment program is run by forensic psychologists and interns, and the cognitive behavioral therapeutic method is used for RDAP (Ellis & Bussert, 2016). Therapists work directly with offenders five days a week. Offenders have a one-on-one therapist assigned to them for individual therapy and assistance (Ellis & Bussert, 2016). They will also participate in process groups, relapse prevention, and other groups dealing with substance abuse issues (Ellis & Bussert, 2016).
As an offender gets close to release, their one-on-one therapist will work with the offender, their family, and U.S. Probation to help the offender transition smoothly from incarceration to society. The therapist remains a resource even once the offender is released, and offenders often remain in contact with their one-on-one therapist.
(B). Does participation in treatment impact an inmate's behavior while in prison?
Some studies have found that inmates who participate in prison-based drug treatment programs have a 45 percent lower misconduct rate than inmates who are not participating in programming (Welsh et al., 2007). When dealing with the RDAP program, inmates know that misbehavior will not be taken lightly by the therapeutic team. RDAP participants are supposed to hold themselves to a higher standard than other inmates. Misbehavior can result in loss of privileges, loss of time off their sentence if they complete the program, or even expulsion from the program. Langan and Pelissier (2002) conducted a study of 600 inmates who completed the federal RDAP program compared to 451 inmates who had not completed the program but had a history of substance abuse. They found that the inmates who had completed the RDAP program had a "significant reduction" in overall institutional misconduct. Similar results have been reported in many studies conducted in state prisons (Welsh et al., 2007).
(C). Does prison-based drug treatment work?
Pelissier et al. (2001) found that only 12.5 percent of RDAP graduates were re-arrested within the first six months of release. Inmates who participated in drug treatment while in prison were found to be 73 percent less likely to be re-arrested than non-treated inmates (Pelissier et al., 2001). Furthermore, it has been found that offenders who complete prison-based drug treatment have a greater chance of successfully completing their post-incarceration probation (Pelissier et al., 2001).
This is vital because in today's world almost all offenders have parole or probation after the completion of their sentence of incarceration. The days of just walking out free and clear are mostly over. It has been estimated that close to 45 percent of all offenders in prisons are now probation and parole violators (Time.com). Many offenders on probation and parole have terms and conditions that make actions that are legal for society in general illegal for them; drinking alcohol and using marijuana are examples. In fact, substance abuse violations are often pitfalls for such offenders. The fact that most who participate in prison-based drug treatment do not violate their probation or parole is a positive sign and clear evidence that these programs are working.
(D). What role does a forensic psychologist play in the RDAP program?
Forensic psychologists are the backbone of the RDAP program. They play a vital role in every step of the inmate's progression. As was mentioned above, the first step in an inmate's journey to RDAP is an evaluation by the psychological staff at the inmate's parent institution. This requires the psychologist to screen the inmate to weed out those who may be malingering in an attempt to gain admission to the program for time off their sentence (Ellis & Bussert, 2016).
Forensic psychologists continually evaluate the inmates who are in the RDAP program and their progression. They develop and run the groups, as well as the program itself. The forensic psychologists work with U.S. Probation, the offender, and their families to effectuate a smooth transition from incarceration to freedom.
When an inmate comes to prison they are placed into the custody and care of the correctional officers and the warden. However, when an inmate is placed in the RDAP program the rules are different. Those inmates are not in the care of the corrections officers. They are in the care of the forensic psychologists. Every aspect of the inmate’s life is dictated by security and therapy, including discipline. This is vastly different from most other inmates. 
III. Non-Residential Drug Abuse Treatment
The non-residential drug treatment program is a comprehensive 12-week program utilizing Cognitive-Behavioral Therapy (CBT) in a group setting (BOP.gov). The program is voluntary, and an inmate's release date is not impacted by their choice to participate or not to participate (BOP.gov). Generally, this program is for offenders who have short sentences and do not meet the criteria for the more intensive residential drug treatment program (BOP.gov).
However, offenders who have tested positive for drugs while incarcerated may also be recommended to take this program by their unit team. Also, those who will be entering the RDAP program are often required to complete the non-residential drug abuse program prior to their admission if time permits. For offenders in the non-residential program forensic and staff psychologists, as well as interns,  work with offenders on issues such as problem solving, rational thinking and communication skills. 
IV. Residential Sex Offender Treatment Program (RSOTP)
This program is for inmates with a high risk of re-offense and is offered at two separate locations. Participation is completely voluntary. The program consists of residential therapeutic treatment lasting 12-18 months (Jones et al., 2006). Much like the RDAP program, an offender must have between 18 and 30 months remaining on their sentence to be accepted into the program. The offenders also must have a conviction for or history of sexual offending (Jones et al., 2006).
The role of a forensic psychologist in BOP sex offender treatment is significant. Once an offender applies for admission to the RSOTP, the forensic psychologist must evaluate the offender to determine if they would be a good fit for the program and if they will be able to benefit from it (Jones et al., 2006). Criteria such as whether an offender has sufficient intellectual ability to participate in psychotherapy and whether there is a mental illness that would preclude program participation are considered by the clinician (Jones et al., 2006). Additionally, offenders are evaluated for acceptance of responsibility, prior treatment failure, and psychopathy (Jones et al., 2006).
(B). Does sex offender treatment work?
There is some evidence that suggests that sex offender treatment does work. Sexual offenders who received treatment had only a 9 percent re-arrest rate, compared to a 12 percent re-arrest rate for untreated offenders. Furthermore, studies have shown that CBT was the most effective form of treatment for sexual offenders (Polizzi et al., 1999).
More recent studies have supported the finding that sex offender treatment reduces recidivism. Olver et al. (2020) found that treatment reduced recidivism among high-risk offenders by as much as 76-81 percent and among medium-risk offenders by 65-75 percent. Importantly, this study showed that rates of reoffense among those with no treatment were significantly higher than among offenders who had been treated (Olver et al., 2020).
(C). Should offenders participate in sex offender treatment?
While treatment for sex offenders is often successful at reducing recidivism, getting offenders to participate is difficult, as they often face a "treatment paradox." Many sex offenders have a desire to seek treatment and never re-offend, but there is a real question of whether the treatment providers have the offender's best interests in mind. Offenders are often forced to waive all confidentiality, which makes treatment providers de facto law enforcement officers and results in offenders facing increased legal jeopardy for their admissions in treatment (Miller, 2010; Strecker, 2011).
Many treatment programs require complete "acceptance of responsibility." The treatment providers often operate on the assumption that the offenders have committed more crimes than they have been caught for. Therefore, as a measure of treatment progress, offenders are often required to complete victim lists, in which an offender details for treatment providers crimes they committed but were never caught for. While this may be a well-intentioned treatment method, with the lack of confidentiality it is often nothing more than a trap that results in additional charges for the offenders. This has resulted in attorneys advising clients to refuse to participate in sex offender treatment. Federal judges have even found that clinicians in the BOP sex offender treatment program have pressured offenders to make victims up in order to be seen as "making treatment progress" so they would not be expelled from the program.
“The Butner Study's sample population consisted of incarcerated individuals participating in a sexual offender treatment program at a federal correctional institution. Tr. at 29. As Rogers testified, the program is “highly coercive.” Id. Unless offenders continue to admit to further sexual crimes, whether or not they actually committed those crimes, the offenders are discharged from the program.” United States v. Johnson, 588 F. Supp. 2d 997, 1006 (S.D. Iowa 2008).
Due to the lack of confidentiality and the removal of statutes of limitations on most sex crimes, it is hard to conclude that any sex offender should participate in a prison-based or community-based sex offender treatment program.
V. Non-Residential Sex Offender Treatment Program
Inmates who do not have enough time to complete the residential sex offender treatment program or who are not considered "high risk" can still participate in sex offender treatment. Multiple institutions throughout the BOP offer non-residential sex offender treatment. These programs typically take 9-12 months to complete (BOP.gov). Offenders learn skills to understand their past offenses and reduce their chances of relapse.
Forensic psychologists play an important role in the non-residential sex offender treatment program as well. They must screen the offender to ensure they meet the criteria for the program, which require the offender to have a sexual offense history and to be willing to participate. The forensic psychologist will also continually evaluate the offender, including a psychosexual evaluation upon admission to the program. However, many of the concerns mentioned above apply fully to the non-residential program as well; in my experience, attorneys typically advise their clients to avoid all prison-based sex offender treatment.
Conclusion
Unfortunately, there are not many prison-based therapeutic treatment programs. Prisons, despite being called Departments of Corrections, really do very little to correct the behavior of the offenders they keep. However, some exceptions do exist, and the Bureau of Prisons' drug treatment programs and sexual offender treatment programs are two such examples. These programs and their success are important to the field of forensic psychology because we are a nation whose prisons are bursting at the seams. Therefore, if we can use psychology to develop programming that reduces recidivism, we are not only protecting society, but we may also change the way policymakers look at drug and sexual offenders. As we know, the laws on the books that deal with many of these offenders are old, draconian, and make little sense. But we also know that the law follows psychology (Gomberg, 2018). So, if programs like these can succeed, hopefully we can see some changes in the laws recognizing what psychology already knows: that these offenders have an illness and can lead a productive and law-abiding life with the right treatment.
REFERENCES
Ellis, A., & Bussert, T. A. (2016). Residential drug abuse treatment program (RDAP). Criminal Justice, 30(4), 30-33.
Gomberg, L. (2018). Forensic psychology 101 (Psych 101 series). Springer Publishing Company, LLC.
Jones, N., Pelissier, B., & Klein-Saffran, J. (2006). Predicting sex offender treatment entry among individuals convicted of sexual offense crimes. Sexual Abuse, 18(1), 83–98.
Langan, N., & Pelissier, B. (2002). The effect of drug treatment on inmate misconduct in federal prisons. Journal of Offender Rehabilitation, 34(2), 21–30.
Miller, J. A. (2010). Sex offender civil commitment: The treatment paradox. California Law Review, 98(6), 2093–2093.
Olver, M. E., Marshall, L. E., Marshall, W. L., & Nicholaichuk, T. P. (2020). A long-term outcome assessment of the effects on subsequent reoffense rates of a prison-based CBT/RNR sex offender treatment program with strength-based elements. Sexual Abuse, 32(2), 127–153.
Pelissier, B., Wallace, S., O'Neil, J. A., & Gaes, G. G. (2001). Federal prison residential drug treatment reduces substance use and arrests after release. The American Journal of Drug and Alcohol Abuse, 27(2), 315–337.
Pelissier, B. (2007). Treatment retention in a prison-based residential sex offender treatment program. Sexual Abuse: A Journal of Research and Treatment, 19(4), 333–346.
Polizzi, D. M., MacKenzie, D. L., & Hickman, L. J. (1999). What works in adult sex offender treatment? A review of prison- and non-prison-based treatment programs. International Journal of Offender Therapy and Comparative Criminology, 43(3), 357–374.
Sirin, C. V. (2011). From Nixon's war on drugs to Obama's drug policies today: Presidential progress in addressing racial injustices and disparities. Race, Gender & Class, 18(3-4), 82–99.
Strecker, D. R. (2011). Sex offender treatment in prisons and the self-incrimination privilege: How should courts approach obligatory, un-immunized admissions of guilt and the risk of longer incarceration? St. John's Law Review, 85(4), 1557–1594.
https://time.com/5700747/parole-probation-incarceration/
Welsh, W., Mcgrain, P., Salamatin, N., & Zajac, G. (2007). Effects of prison drug treatment on inmate misconduct. Criminal Justice and Behavior, 34(5), 600–615.
United States v. Johnson, 588 F. Supp. 2d 997, 1006 (S.D. Iowa 2008).
https://www.bop.gov/inmates/custody_and_care/docs/20170914_BOP_National_Program_Catalog.pdf
emplate-420 · 4 years
Quote
Launch HN: QuestDB (YC S20) – Fast open source time series database Hey everyone, I’m Vlad and I co-founded QuestDB ( https://questdb.io ) with Nic and Tanc. QuestDB is an open source database for time series, events, and analytical workloads with a primary focus on performance ( https://ift.tt/2DmHzQo ). It started in 2012 when an energy trading company hired me to rebuild their real-time vessel tracking system. Management wanted me to use a well-known XML database that they had just bought a license for. This option would have required to take down production for about a week just to ingest the data. And a week downtime was not an option. With no more money to spend on software, I turned to alternatives such as OpenTSDB but they were not a fit for our data model. There was no solution in sight to deliver the project. Then, I stumbled upon Peter Lawrey’s Java Chronicle library [1]. It loaded the same data in 2 minutes instead of a week using memory-mapped files. Besides the performance aspect, I found it fascinating that such a simple method was solving multiple issues simultaneously: fast write, read can happen even before data is committed to disk, code interacts with memory rather than IO functions, no buffers to copy. Incidentally, this was my first exposure to zero-GC Java. But there were several issues. First, at the time It didn’t look like the library was going to be maintained. Second, it used Java NIO instead of using the OS API directly. This adds overhead since it creates individual objects with sole purpose to hold a memory address for each memory page. Third, although the NIO allocation API was well documented, the release API was not. It was really easy to run out of memory and hard to manage memory page release. I decided to ditch the XML DB and then started to write a custom storage engine in Java, similar to what Java Chronicle did. This engine used memory mapped files, off-heap memory and a custom query system for geospatial time series. Implementing this was a refreshing experience. I learned more in a few weeks than in years on the job. Throughout my career, I mostly worked at large companies where developers are “managed” via itemized tasks sent as tickets. There was no room for creativity or initiative. In fact, it was in one’s best interest to follow the ticket's exact instructions, even if it was complete nonsense. I had just been promoted to a managerial role and regretted it after a week. After so much time hoping for a promotion, I immediately wanted to go back to the technical side. I became obsessed with learning new stuff again, particularly in the high performance space. With some money aside, I left my job and started to work on QuestDB solo. I used Java and a small C layer to interact directly with the OS API without passing through a selector API. Although existing OS API wrappers would have been easier to get started with, the overhead increases complexity and hurts performance. I also wanted the system to be completely GC-free. To do this, I had to build off-heap memory management myself and I could not use off-the-shelf libraries. I had to rewrite many of the standard ones over the years to avoid producing any garbage. As I had my first kid, I had to take contracting gigs to make ends meet over the following 6 years. All the stuff I had been learning boosted my confidence and I started performing well at interviews. 
This allowed me to get better paying contracts, I could take fewer jobs and free up more time to work on QuestDB while looking after my family. I would do research during the day and implement this into QuestDB at night. I was constantly looking for the next thing, which would take performance closer to the limits of the hardware. A year in, I realised that my initial design was actually flawed and that it had to be thrown away. It had no concept of separation between readers and writers and would thus allow dirty reads. Storage was not guaranteed to be contiguous, and pages could be of various non-64-bit-divisible sizes. It was also very much cache-unfriendly, forcing the use of slow row-based reads instead of fast columnar and vectorized ones.Commits were slow, and as individual column files could be committed independently, they left the data open to corruption. Although this was a setback, I got back to work. I wrote the new engine to allow atomic and durable multi-column commits, provide repeatable read isolation, and for commits to be instantaneous. To do this, I separated transaction files from the data files. This made it possible to commit multiple columns simultaneously as a simple update of the last committed row id. I also made storage dense by removing overlapping memory pages and writing data byte by byte over page edges. This new approach improved query performance. It made it easy to split data across worker threads and to optimise the CPU pipeline with prefetch. It unlocked column-based execution and additional virtual parallelism with SIMD instruction sets [2] thanks to Agner Fog’s Vector Class Library [3]. It made it possible to implement more recent innovations like our own version of Google SwissTable [4]. I published more details when we released a demo server a few weeks ago on ShowHN [5]. This demo is still available to try online with a pre-loaded dataset of 1.6 billion rows [6]. Although it was hard and discouraging at first, this rewrite turned out to be the second best thing that happened to QuestDB. The best thing was that people started to contribute to the project. I am really humbled that Tanc and Nic left our previous employer to build QuestDB. A few months later, former colleagues of mine left their stable low-latency jobs at banks to join us. I take this as a huge responsibility and I don’t want to let these guys down. The amount of work ahead gives me headaches and goosebumps at the same time. QuestDB is deployed in production, including into a large fintech company. We’ve been focusing on building a community to get our first users and gather as much feedback as possible. Thank you for reading this story - I hope it was interesting. I would love to read your feedback on QuestDB and to answer questions. [1] https://ift.tt/11g71v6 [2] https://ift.tt/39KJE6k [3] https://ift.tt/2CUWTH9 [4] https://ift.tt/30TsPDL... [5] https://ift.tt/37TmfQV [6] https://ift.tt/2EiOfCz July 28, 2020 at 09:57AM
https://ift.tt/2EjYre2
lodelss · 4 years
Link
We Said We Would See Him in Court and We Did
Several months into the Trump administration, my wife was doing The New York Times crossword puzzle and came across this clue: “Group that told President Trump, ‘We’ll see you in court.’” I’m not generally much use when it comes to the crossword, but on that one I could help. She didn’t really need the assistance of course, as the ACLU is my employer, and she, The New York Times crossword puzzle drafters, and much of the country already knew that it was the ACLU who told the president we’d see him in court.   
In fact, we told him that before he took office. Just days after he improbably won the presidential election, we took out ads in The New York Times and Los Angeles Times telling the president-elect that if he sought to implement some of the programs he had promised on the campaign trail — denying legal access to abortion, implementing restrictive immigration practices, undermining voting rights and more — we would sue. We kept our promise.
https://twitter.com/ACLU/statuses/1144797689030348801
As we mark three years since we put President-elect Trump on notice, we’ve filed over 100 lawsuits, and over 140 other legal actions — Freedom of Information Act requests, administrative complaints, and other legal mechanisms to halt illegal policies — against the president, his administration, or those inspired by his victory to cut back on civil rights and civil liberties. We’ve won many of them, and in the process, protected the rights of millions of people to be treated with dignity and respect for their basic constitutional rights.
IMMIGRANTS’ RIGHTS
In his first week in the Oval Office, President Trump issued an executive order banning immigrants from seven predominantly Muslim countries from entering the U.S. The ACLU quickly responded by filing and winning the first legal challenge to the Muslim ban that very weekend. A federal court held an emergency hearing on a Saturday night, and enjoined its implementation the day after he put the policy in place. When the first ban was declared unconstitutional by the courts, Trump was forced to issue a revised ban. When we and others successfully challenged that revised ban, he issued still a third version. That, too, was struck down by the lower courts, although the Supreme Court upheld it on a 5-4 vote along party lines. But that third ban, while still a Muslim ban, was narrower than the first two, and we continue to challenge its implementation in the courts. 
https://twitter.com/ACLU/statuses/825532347839836161
The lion’s share of our Trump-related work has focused on defending immigrants, because that is where the president has directed his most virulent, egregious and systematic attacks.
Trump has particularly targeted those seeking asylum, and we’ve countered him at every point. His goal is to deter asylum applicants — regardless of the validity of their claims to facing persecution at home. In what is surely the cruelest of his many anti-asylum initiatives, he separated children from their parents, in hopes that this would discourage other families from seeking refuge and safety here. We sued, obtained a ruling barring the practice, and continue to press the administration to reunite the thousands of families it so heartlessly separated. The ACLU has helped reunite more than two thousand families, but we keep discovering more who were separated, and we won’t rest until we’ve reunited them all.
https://twitter.com/ACLU/statuses/1000403709237583872
Trump has also locked up asylum applicants without hearings in which they could show that they pose no flight risk or danger, and therefore should be freed. In our view, the government cannot constitutionally detain people absent a demonstrated reason for doing so, and where an asylum seeker poses neither a flight risk nor a danger, she cannot constitutionally be deprived of her liberty. Here, too, the courts have blocked the administration’s practice thanks to litigation by us and our allies, requiring it to hold hearings and release those who pose no threat.
The Trump administration also sought to change the legal standard in order to make it more difficult to get asylum based on fears of gang violence or domestic abuse in one’s home country. Again, we sued. And again, a federal court blocked the administration from implementing that policy.  
Trump issued an executive order denying asylum to anyone who entered the country other than at an official port of entry — even though the asylum statute provides that asylum is available to all who face persecution at home, regardless of how or where they entered the U.S. The courts blocked that policy, too. He then sought to deny asylum to anyone who has traveled through another country to reach the U.S. and has not applied for and been denied asylum there. Again, the courts declared the policy illegal. The Supreme Court has temporarily stayed that injunction pending the government’s appeal, but our legal challenge continues.
https://twitter.com/ACLU/statuses/1146471278532071424
Trump also sought to deny legal protections to immigrants from countries that either would not take their citizens back, or where conditions were so bad that we had long afforded their nationals temporary protected status, which allowed them to live and work among us. When Trump sought to revoke their status en masse, the ACLU and our allies sued, and obtained injunctions barring the wholesale denial of legal status to over 400,000 people.   
Most recently, Trump sought to expand so-called “expedited removal,” a summary deportation process that short-circuits many of the essential procedural protections generally afforded to immigrants in deportation proceedings. These procedures have long been limited to persons apprehended within 100 miles of the border and within two weeks of illegal entry. Trump wants to expand exponentially the number of people who could be swiftly deported under this process, to include anyone who had entered illegally within the past two years, apprehended anywhere in the nation. Once again, we sued, and a federal judge blocked the initiative as illegal. 
Trump has attempted, virtually since the day he took office, to build a wall at the southern border. He repeatedly asked Congress for funding to build the wall, and repeatedly they refused. He went ahead and ordered the wall built anyway, declaring a fake national emergency and diverting funds appropriated for other purposes. We sued to stop the diversion of funds, and the lower courts ruled the spending illegal. The Supreme Court granted a temporary stay, but the challenge continues with an argument in the U.S. Court of Appeals for the Ninth Circuit on November 12. 
We are not the only ones to see Trump in court. Other groups have successfully challenged his revocation of protection for the approximately 800,000 so-called Dreamers, young undocumented people brought here by their parents, to whom the Obama administration gave deferred action status, allowing them to live, work, and go to school here. And the courts have also blocked Trump’s efforts to expand the definition of persons deportable as “public charges” to encompass immigrants who even briefly fall on hard times and need virtually any sort of government assistance.
In short, judicial review has been critical to protecting the basic human rights of tens of thousands of immigrants throughout this country. And that's only the beginning.
REPRODUCTIVE FREEDOM
As a candidate, Trump promised to overturn Roe v. Wade, the Supreme Court decision protecting abortion access, and in response, seven states have enacted laws banning abortion. We’ve challenged five of the state bans and obtained injunctions against each of them; our ally, the Center for Reproductive Rights, has blocked the other two. The states are appealing, but we will continue to defend this fundamental right. 
https://twitter.com/ACLU/statuses/1189194439765438464
We also successfully blocked the Trump administration’s own ban on abortion. This prohibition was applied selectively to some of the most vulnerable women in this country: undocumented teens held in U.S. custody. When one such teen, detained in Texas, learned that she was pregnant and sought to exercise her constitutional right to an abortion, the Trump administration refused to let her out of its facility to go to the clinic for the procedure. We sued in federal court, and won. We now have a nationwide injunction against the practice. 
And most recently, a federal judge blocked President Trump's so-called "conscience rule," which would have allowed doctors, nurses, and other health care providers nationwide to place their own views over the needs of their patients and refuse to provide health care to which they object on moral or religious grounds. The court held that the rule was arbitrary and rested on demonstrably false assertions by the administration.
VOTING RIGHTS
The president tried to rig the census, by adding a question about citizenship that would have deterred tens of thousands of immigrants from filling out the census form. The Census Bureau itself objected to the plan, because they knew it would lead to undercounting of people in areas where immigrants live, often urban areas that the administration sees as likely to vote Democratic. The Constitution requires the census to count all people, not just citizens. The undercounting would have translated into fewer representatives in Congress for districts with large immigrant populations, and less federal support for all the people who live there, citizen and noncitizen alike. The initiative’s pre-textual rationale was initially drafted by a Republican gerrymandering specialist who advised in a confidential memo that it would advantage “Republicans and Non-Hispanic Whites.” We sued and won. In June 2019, the Supreme Court affirmed our victory, finding that the administration’s justification for adding the question was pre-textual — or in plain English, a lie. Trump bristled at the defeat, and only after his entire legal team resigned over his direction to find a way to reinstitute the question did he admit defeat and abandon the effort. 
https://twitter.com/ACLU/statuses/1144257601364008960
ENEMY COMBATANTS
Trump vowed to expand the detention of enemy combatants at Guantanamo Bay, although he has not yet dared do so. His administration did lock up a U.S. citizen as an “enemy combatant” in secret in Iraq, without access to a lawyer, without a hearing, and without any criminal charges. The ACLU sued and won. We first obtained an order requiring the administration to give him access to our attorneys. Then, we challenged the legal basis for detaining a U.S. citizen indefinitely without charges, and the government gave in and released him. The Trump administration has not held a U.S. citizen as an “enemy combatant” since. 
LGBTQ RIGHTS
Trump has also declared war on the LGBTQ community.  Here, too, we’ve challenged him every step of the way. He barred transgender people from serving in the military, despite the military’s finding that there was no basis for excluding them. We obtained an injunction against the ban, and forced Trump to water it down, allowing currently enlisted transgender soldiers to remain. But the revised ban still bars entry to new transgender enlistees. That, too, was blocked, but the lower court’s injunction was temporarily stayed by the Supreme Court pending the government’s appeals, which continue.
https://twitter.com/ACLU/statuses/1116794914795216899
The Trump administration also reversed the federal government’s position on whether LGBTQ individuals are protected by federal civil rights law from being fired or otherwise discriminated against because of who they are. We won victories in the federal appeals courts, which ruled that firing someone for being gay or transgender is a form of sex discrimination forbidden by federal law. In October, we argued before the Supreme Court on behalf of a gay man and a transgender woman who had been fired because of who they are. The Trump administration argued the other side.
https://twitter.com/ACLU/statuses/1176181332390621186
In many of these cases, the courts have served their intended purpose: Protecting the vulnerable from abuses directed at them by the president, upholding the rule of law, and stopping arbitrary and cruel treatment of hundreds of thousands of people. We are proud to have led the legal resistance, with full participation of many of our allies in the immigrants’ rights, reproductive rights, and civil rights communities.
OUTSIDE THE COURTS
But we have not limited our response to the courts. We are committed to defending liberty through all available means, and in a democracy, the political process must also be an essential part of that defense. In the wake of President Trump’s election, our membership soared from 400,000 to 1.8 million, and many of our supporters said they wanted not only to join and donate, but also to take action. The ACLU launched People Power, a nationwide mobilization platform that empowers ACLU volunteers to fight for liberty at the local level. Over half a million people have since taken action with us as People Power volunteers — visiting a legislator or town council, participating in a demonstration, or gathering signatures and getting out the vote for ballot initiatives furthering civil liberties, among others. They have encouraged local sheriffs and police chiefs to adopt immigrant-friendly law enforcement policies; advocated for the expansion of voting rights; gathered over 150,000 signatures for Amendment 4 in Florida, which paved the way to re-enfranchise over 1.4 million previously incarcerated people; and showed up at demonstrations at the border and in many cities to protest anti-immigrant policies. Today, People Power volunteers are pressing presidential candidates of all parties to endorse critical civil liberties initiatives, including reducing mass incarceration. Judge Learned Hand, one of the great federal judges of all time, once said that “liberty lies in the hearts of men and women.” We are deploying People Power to nurture that spirit and spread it through direct action. 
https://twitter.com/ACLU/statuses/1011734566514634755
We also engaged in the 2018 midterms in ways that were not possible before. We spent more than $5 million and devoted thousands of hours of volunteer and staff time to the fight in Florida for Amendment 4. We supported similar voter access reform measures in Nevada and Michigan, both of which passed. We supported a successful referendum to end non-unanimous jury verdicts in Louisiana, a Reconstruction-era practice that was designed to nullify the votes of Black jurors. And we helped to defeat a transphobic ballot measure in Massachusetts. In key elections, we also did substantial voter education and outreach to ensure that citizens were aware of the civil rights and civil liberties stakes, reminding voters to “Vote like your rights depend on it.”

President Trump’s election posed immediate and wide-ranging threats to civil liberties. The threats have grown, not diminished, over time. But we have been there every step of the way, fighting to defend the civil rights and civil liberties of all. Most of these legal fights are ongoing, and we will almost certainly have to mount new legal challenges to other unlawful, unconstitutional, or un-American policies. For nearly 100 years, the ACLU has steadfastly fought battles large and small to secure freedoms and advance equality, no matter who occupies the Oval Office. Great challenges may lie ahead, but rest assured that, with your help, we stand ready to fight for a more perfect union.
Published November 8, 2019 at 03:25AM via ACLU https://ift.tt/2WQrMSI
ciathyzareposts · 4 years
Text
New Tricks for an Old Z-Machine, Part 3: A Renaissance is Nigh
In 1397, a Byzantine scholar named Manuel Chrysoloras arrived in Florence, Italy. He brought with him knowledge of Greek, along with many ancient manuscripts in Greek and Latin that had been lost to the West in the chaos following the collapse of the Roman Empire. This event is considered by many historians to mark the first stirrings of the Italian Renaissance, and with them the beginning of the epoch of scientific, material, and social Progress which has persisted right up to the present day.
In 1993, an Oxford graduate student named Graham Nelson released a text adventure called Curses that, among other things, functioned as an advertisement for a programming language he called Inform, which targeted Infocom’s old Z-Machine. This event is considered by most of us who have seriously thought about the history of text adventures in the post-Infocom era to mark the first stirrings of the Interactive Fiction Renaissance, and with them the beginning of an interactive-fiction community that remains as artistically vibrant as ever today.
Yes, I can see you rolling your eyes at the foregoing. On one level, it is indeed an unbearably pretentious formulation, this comparing of one of the most earthshaking events in human culture writ large with the activities of a small community of niche enthusiasts. Yet, if we can agree to set aside the differences in scale and importance for the moment, the analogy really is a surprisingly apt one. Like the greater Renaissance in Europe, the Interactive Fiction Renaissance prepared a group of people to begin moving forward again by resurfacing old things that had been presumed lost forever. Taking pride of place among those things, being inextricably bound up with everything that followed, was the Z-Machine, functioning first as a means of running Infocom’s classic games, as we saw in the first article in this series; and then as a means of running new games, as we began to see in the second article and will examine in still more detail today.
As Graham Nelson began to pursue the dream of writing new software to run on Infocom’s old virtual machine, he had no access to the refined tools Infocom had used for that task. Thus he was forced to start from nothing — from what amounted to a bare chunk of (virtual) computing hardware, with no compilers or any other software helpers to aid his efforts. He had to start, in other words, at the bare metal, working in assembly language.
Assembly language is the lowest level at which any computer, whether real or virtual, can be (semi-)practically programmed. Its statements correspond to the individual opcodes of the processor itself, which normally encompass only the most granular of commands: add, subtract, multiply, or divide these numbers together; grab the number from this local register and put it into that memory location; etc. Assembly language is the primordial language which underpins everything, the one which must be utilized first to write the compilers that allow programmers to develop software in less granular, more structured, more human-friendly languages such as C, Pascal, and BASIC.
Already at this level, however, the Z-Machine separates itself from an ordinary computer. Alongside the rudimentary, granular opcodes that are common to any Turing-complete computer, it implements other opcodes that are absurdly baroque. The “read” opcode, for example, does all of the work of accepting a full line of text from the keyboard, then separating out its individual words and “tokenizing” them: i.e., looking them up in a dictionary table stored at a defined location in the virtual machine’s memory and converting them into the codes listed there. Another opcode, “save,” simply orders the interpreter to save the current state of the machine to disk, however it prefers to go about it; ditto the “restore” opcode. These complex and highly specialized opcodes exist because the Z-Machine, while it is indeed a Turing-complete, fully programmable anything machine in the abstract, is nevertheless heavily optimized toward the practical needs of text adventures. Thus an object table meant to represent rooms and things in the world of a game is hard-coded right into its memory map, and there are other single opcodes which encapsulate relatively complex tasks like looking up or changing the properties of an object in the world, or moving one object into another object.
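To make the contrast concrete, here is a small hypothetical fragment in the Inform 6 notation that appears later in this article (it is not code from Curses or from any Infocom game), with a comment noting the single Z-Machine opcode each statement compiles down to. The object names and the attribute declaration are invented purely for illustration; in a real game the standard library would supply the light attribute.

! Hypothetical objects, declared only so that the statements below have
! something to act on. The light attribute is normally supplied by the
! Inform standard library.
Attribute light;
Object trophy_case "trophy case";
Object lamp "brass lamp";

[ OpcodeTour x;
    move lamp to trophy_case;    ! compiles to a single insert_obj opcode
    give lamp light;             ! set_attr: flip one attribute flag on
    if (lamp has light)          ! test_attr: a branching opcode
        x = parent(lamp);        ! get_parent: walk the built-in object tree
    give lamp ~light;            ! clear_attr: flip the flag back off
    print (name) lamp;           ! print_obj: print the short name stored in
    new_line;                    ! the Z-Machine's own object table
    return x;
];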
Strictly speaking, none of this is really necessary; the Z-Machine is far more complicated than it needs to be in abstract terms. Infocom could have created a robust virtual machine which implemented only traditional low-level opcodes, building everything else out in the form of software libraries running on said virtual machine. But they had a strong motivation for hard-coding so many of the needs of a text adventure right into the virtual hardware: efficiency. A baroque opcode like “read” meant that all of the many steps and stages which went into accepting the player’s command could take place at the interpreter level, running natively on the host computer. Implementing a virtual machine of any sort was a serious challenge on a 1 MHz 8-bit computer like an Apple II or Commodore 64; Infocom needed every advantage they could get.
By the time of Graham Nelson’s experimentation with the Z-Machine, most of the concerns that had led Infocom to design it in this way had already fallen by the wayside. The average computer of the early 1990s would have been perfectly capable of running text adventures through a simpler and more generic virtual machine where the vagaries of the specific application were implemented in software. Nevertheless, the Z-Machine was the technology Graham had inherited and the one he was determined to utilize. When he began to work on Inform, he tailored it to the assumptions and affordances of the Z-Machine. The result was a high-level programming language with an unusual degree of correspondence to its underlying (virtual) hardware. Most obviously, the earliest versions of Inform couldn’t make games whose total compiled size exceeded 128 K, the limit for the version 3 Z-Machine they targeted. (This figure would be raised to 256 K once Inform began to target the version 4 and 5 Z-Machine.)
Yet this limitation was only the tip of the iceberg. Each function in Inform was limited to a maximum of 15 local variables because that was all that the stack mechanism built into the Z-Machine allowed. Meanwhile only 240 global variables could exist because that was the maximum length of the table of same hard-coded into the Z-Machine’s memory map. Much of Inform came to revolve around the Z-Machine’s similarly hard-coded object table, which was limited to just 255 objects in version 3 of the virtual machine. (This limitation was raised to 65,535 objects in the version 4 and 5 Z-Machine, thereby becoming in practice a non-issue.) Further, each object could have just 32 properties, or states of being — its weight, its open or closed status, its lit or unlit status, etc. — because that was all that was allowed by the Z-Machine’s standard object table. (Starting with version 4 of the Z-Machine, objects could have up to 64 properties.) All of the dynamic data in a game — i.e., data that could change during play, as opposed to static data like code and text strings — had to fit into the first 64 K of the story file, an artifact of the Z-Machine’s implementation of virtual memory, which had allowed it to pack 128 K or more of game into computers with far less physical memory than that. This limitation too was inherited by Inform despite the fact that by the early 1990s the virtual-memory system had become superfluous, a mere phantom limb which Inform nevertheless had to accept as part of the bargain with the past which it had struck.
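Those ceilings surface directly in Inform source code. In the hypothetical fragment below (all of the names are invented), every Global directive claims one of the 240 slots in the Z-Machine’s table of global variables, while the identifiers following a routine’s name are its locals, of which the virtual machine’s call frame holds at most fifteen.

Object trophy_case "trophy case";    ! a hypothetical container to count inside

! Each Global directive claims one of the 240 global-variable slots in the
! Z-Machine's memory map (the standard library reserves a good many of them
! for itself).
Global lamp_turns_left;
Global curse_strength;

! obj and total are local variables, held in the Z-Machine's routine-call
! frame: no more than fifteen names may follow a routine's own.
[ CountTreasures obj total;
    objectloop (obj in trophy_case)
        total++;
    return total;
];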
Indeed, having been confronted with so many undeniable disadvantages arising from the use of the Z-Machine, it’s natural for us to ask what actual advantages accrued from the use of a fifteen-year-old virtual machine designed around the restrictions of long-obsolete computers, as opposed to taking the TADS route of designing a brand new virtual machine better suited to the modern world. One obvious answer is portability. By the early 1990s, several different open-source Z-Machine interpreters already existed, which between them had already been ported to virtually every computing platform in the world with any active user base at all. Any Inform game that Graham Nelson or anyone else chose to write would become instantly playable on all of these computers, whose combined numbers far exceeded those to which Mike Roberts, working virtually alone on TADS, had so far managed to port his interpreter. (The only really robust platform for running TADS games at the time was MS-DOS; even the Macintosh interpreters were dogged by bugs and infelicities. And as for Graham’s favored platform, the British-to-the-core Acorn Archimedes… forget about it.)
In reality, though, Inform’s use of the Z-Machine appealed at least as much to the emotions as to technical or practical considerations. The idea of writing new games to run on Infocom’s old virtual machine had a romantic and symbolic allure that many found all but irresistible. What better place to build a Renaissance than on the very foundations left behind by the storied ancients? Many or most of the people who came to use Inform did so because they wanted to feel like the heirs to Infocom’s legacy. Poor TADS never had a chance against that appeal to naked sentimentality.
Even as Inform was first gaining traction, it was widely known that Infocom had had a programming language of their own for the Z-Machine, which they had called ZIL: the “Zork Implementation Language.” Yet no one outside of Infocom had ever seen any actual ZIL code. How closely did Inform, a language that, like ZIL, was designed around the affordances and constraints of the Z-Machine, resemble its older sibling? It wasn’t until some years after Inform had kick-started the Interactive Fiction Renaissance that enough ZIL code was recovered to give a reasonable basis for comparison. The answer, we now know, is that Inform resembles ZIL not at all in terms of syntax. Indeed, the two make for a fascinating case study in how different minds, working on the same problem and equipped with pretty much the same set of tools for doing so, can arrive at radically different solutions.
As I described in an article long ago, ZIL was essentially a subset of the general-purpose programming language MDL, which was used heavily during the 1970s by the Dynamic Modeling Group at MIT, the cradle from which Infocom sprang. (MDL was itself a variant of LISP, for many years the language of choice among artificial-intelligence researchers.) A bare-bones implementation of the famous brass lantern in Zork I looked like this in ZIL:
<OBJECT LANTERN
    (LOC LIVING-ROOM)
    (SYNONYM LAMP LANTERN LIGHT)
    (ADJECTIVE BRASS)
    (DESC "brass lantern")
    (FLAGS TAKEBIT LIGHTBIT)
    (ACTION LANTERN-F)
    (FDESC "A battery-powered lantern is on the trophy case.")
    (LDESC "There is a brass lantern (battery-powered) here.")
    (SIZE 15)>
Inform has a fairly idiosyncratic syntax, but most resembles C, a language which was initially most popular among Unix systems programmers, but which was becoming by the early 1990s the language of choice for serious software of many stripes running under many different operating systems. The same lantern would look something like this in a bare-bones Inform implementation:
Object -> lantern "brass lantern"
  with  name 'lamp' 'lantern' 'light' 'brass',
        initial "A battery-powered lantern is on the trophy case.",
        description "There is a brass lantern (battery-powered) here.",
        after [;
            SwitchOn:  give self light;
                       StartDaemon(self);
            SwitchOff: give self ~light;
        ],
        size 15,
  has   switchable;
After enough information about ZIL finally emerged to allow comparisons like the above, many Infocom zealots couldn’t help but feel a little disappointed about how poorly Infocom’s language actually fared in contrast to Graham Nelson’s. Having been designed when the gospel of object-oriented programming was still in its infancy, ZIL, while remarkable for embracing object-oriented principles to the extent it does, utilizes them in a slightly sketchy way, via pointers to functions which have to be defined elsewhere in the code. (This is the purpose of the “ACTION LANTERN-F” statement in the ZIL code above — to serve as a pointer to the routine that should run when the player tries to light the lantern.) Inform, on the other hand, allows all of the code and data associated with an object such as the brass lantern to be neatly encapsulated into its description. (The “SwitchOn” and “SwitchOff” statements in the Inform excerpt above explain what should happen when the player tries to light or extinguish the lantern.) A complete implementation of the Zork I lantern in Inform would probably fill a dozen or more lines beyond what we see above, monitoring the charge of the battery, allowing the player to swap in a new battery, etc. — all neatly organized in one chunk of code. In ZIL, it would be scattered all over the place, wired together via a confusing network of pointers. In terms of readability alone, then, Inform excels in comparison to ZIL.
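As a rough sketch of what that fuller lantern implementation might look like, here is the same object again with an invented battery_charge property and a daemon that drains it, everything still gathered into one tidy block. This is hypothetical code rather than Infocom’s or Graham Nelson’s; as in the snippet above, the arrow files the lantern inside whichever room was defined immediately before it, and the SwitchOn, SwitchOff, and daemon machinery assumes the standard Inform library.

Object -> lantern "brass lantern"
  with  name 'lamp' 'lantern' 'light' 'brass',
        initial "A battery-powered lantern is on the trophy case.",
        description "There is a brass lantern (battery-powered) here.",
        battery_charge 100,          ! invented property: turns of light left
        daemon [;                    ! run once per turn after StartDaemon(self)
            if (self hasnt on) return;
            self.battery_charge = self.battery_charge - 1;
            if (self.battery_charge == 0) {
                give self ~on ~light;
                StopDaemon(self);
                if (self in player || self in location)
                    "The brass lantern flickers and dies.";
            }
        ],
        after [;
            SwitchOn:  give self light; StartDaemon(self);
            SwitchOff: give self ~light;
        ],
        size 15,
  has   switchable;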
Most shockingly of all, given the Infocom principals’ strong grounding in computer science, they never developed a standard library for ZIL — i.e., a standardized body of code to take care of the details that most text adventures have in common, such as rooms and compass directions, inventory and light sources, as well as the vagaries of parsing the player’s commands and keeping score. Instead the author of each new game began by cannibalizing some of the code to do these things from whatever previous game was deemed to be most like this latest one. From there, the author simply improvised. The Inform standard library, by contrast, was full-featured, rigorous, and exacting by the time the language reached maturity — in many ways a more impressive achievement than the actual programming language which undergirded it.
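To give a sense of just how much heavy lifting that library does, here is roughly the smallest complete Inform 6 game one might build around the lantern above, a sketch which assumes the standard library files (Parser, VerbLib, and Grammar) that shipped with Inform. The parser, the compass directions, LOOK, TAKE, INVENTORY, SCORE, and the rest all come from those three Include lines; the author supplies only the world itself.

Constant Story "LANTERN";
Constant Headline "^A bare-bones Inform 6 example.^";

Include "Parser";
Include "VerbLib";

Object LivingRoom "Living Room"
  with  description "A spartan living room, furnished only well enough to
            give the lantern somewhere to sit.",
  has   light;

Object -> lantern "brass lantern"
  with  name 'lamp' 'lantern' 'light' 'brass',
        description "There is a brass lantern (battery-powered) here.",
  has   switchable;

[ Initialise;
    location = LivingRoom;
    "^Somewhere to test a lantern...^";
];

Include "Grammar";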
Because it was coded so much more efficiently than Infocom’s ad-hoc efforts, this standard library allowed an Inform game to pack notably more content into a given number of kilobytes. The early versions of Curses, for example, were already sprawling games by most standards, yet fit inside the 128 K Z-Machine. Later versions did move to, and eventually all but fill, the version 5 Z-Machine with its 256 K memory map. Still, the final Curses offers vastly more content than anything Infocom ever released, with the possible exception only of Zork Zero (a game which was itself designed for a version 6 Z-Machine that took the ceiling to 512 K). Certainly any comparison of A Mind Forever Voyaging and Trinity — both famously big games with a story-file size pegged to the version 4 and 5 limit of 256 K — to the final version of Curses — story-file size: 253 K — must reveal the last to be an even more complex, even more expansive experience.
So, the Inform development system could hold its head up proudly next to ZIL; in fact, it was so well-thought-through that ZIL would thoroughly disappoint by comparison once hobbyists finally learned more about it. But what of Curses itself, the game with which Inform was so indelibly linked during the first few years of its existence? Was it also up to the Infocom standard?
Before delving into that question in earnest, I should perhaps elaborate a bit on Graham Nelson’s own description of Curses from the previous article.
In the game, then, you play the role of a rather hapless scion of a faded aristocratic family. Aristocratic life not being what it once was, you’ve long since been forced to register the familial mansion with the National Trust and open it up to visitors on the weekends in order to pay the bills. As the game proper begins, your family is about to take a jaunt to Paris, and you’ve come up to the attic — a place in as shabby a state as the rest of the house — to look for a tourist map you just know is lying around up here somewhere.
It's become a matter of pride now not to give up. That tourist map of Paris must be up here somewhere in all this clutter, even if it has been five years since your last trip. And it's your own fault. It looks as if your great-grandfather was the last person to tidy up these lofts...
Attic
The attics, full of low beams and awkward angles, begin here in a relatively tidy area which extends north, south and east. The wooden floorboards seem fairly sound, just as well considering how heavy all these teachests are. But the old wiring went years ago, and there's no electric light. A hinged trapdoor in the floor stands open, and light streams in from below.
In the best tradition of shaggy-dog stories, your search for the map turns into an extended adventure through space and time. You just keep finding more and more secret areas and secret things in the attics and the grounds surrounding the house, including a disconcerting number of portals to other times and places. The whole thing eventually comes to revolve around an ancient familial curse reaching back to the time of Stonehenge. If you manage to get to the end of the game — no small feat, believe me! — you can finally lift the curse. And, yes, you can finally find the bloody Paris tourist map.
It’s hard to know where to start or end any discussion of Curses. It’s one of those works that sends one off on many tangents: its technology, its historical importance, its literary worth as a writing exercise or its ludic worth as an exercise in design. Faced with this confusion, we might as well start with what Curses has meant to me.
For Curses is indeed a game which carries a lot of personal importance for me. I first discovered it about four or five years after its original release, when I was working a painfully dull job as a night-shift system administrator — a job which paid not so much for what I did each night as for my just being there if something should go wrong. I had, in other words, copious amounts of free time on my hands. I used some of it playing a bunch of post-Infocom text adventures which I hadn’t previously realized existed. Because they looked — or could be made to look — like just another scrolling terminal window, they suited my purposes perfectly. Thus my memory of many a 1990s classic is bound up with those nights in a deserted data center — with the strange rhythm of being awake when everyone else is asleep, and vice versa.
Of all the games I played during that time, Curses made one of the greatest impressions on me. I was still young enough then to be profoundly impressionable in general, and I found its casual erudition, its willingness to blend science with poetry, mathematics with history, to be absolutely entrancing. Having been a hopeless Anglophile ever since I first heard a Beatles record at circa six years old, I was well-primed to fall in love with Graham Nelson’s dryly ironic and oh-so-English diction. In fact, as I began to write more seriously and extensively myself in the years that followed, I shamelessly co-opted some of his style as my own. I like to think that I’ve become my own writer in the time since that formative period, but some piece of Graham is undoubtedly still hiding out down there somewhere in the mishmash of little ticks and techniques that constitute my writer’s voice.
For all that Curses entranced me, however, I never came close to completing it. At some point I’d get bogged down by its combinatorial explosion of puzzles and places, by its long chains of dependencies where a single missed or misplaced link would lock me out of victory without my realizing it, and I’d drift away to something else. Eventually, I just stopped coming back altogether.
I was therefore curious and maybe even slightly trepidatious to revisit Curses for this article some two decades after I last attempted to play it. How would it hold up? The answer is, better than I feared but somewhat worse than I might have hoped.
The design certainly shows its age. I have less patience than ever today for walking-dead scenarios that are as easy to stumble into as they are here. I wholeheartedly agree with Graham’s own statement that “Curses is by any reasonable standard too hard.”
So far, so expected. But I was somewhat more surprised by my crotchety middle-aged take on the writing. Mind you, some aspects of it still bring a smile to my face; I still can’t resist saying, “It’s a wrench, but I’ll take it,” every time I pick up a wrench in real life, much to my wife’s disgust. (Luckily, as she’d be the first to point out, I’m not much of a handyman, so I don’t tend to pick up too many of them.) In other places, though, what used to strike me as delightful now seems just a little bit too precious for its own good. I can still recognize the influence it had over me and my own writing, but it does feel at times like an influence I’ve ever so slightly outgrown. Today, things like the game’s quotation of the lovely Dorothy Parker poem “Inventory” — “Four be the things I’d been better without: Love, curiosity, freckles, and doubt.” — when you first type the command of the same name can feel just a little bit facile. Curses is constantly making cultural cross-connections like these, but they’re ultimately more clever than they are profound. It’s a game packed with a lot of cultural stuff, but not one with much to really say about any of it. It instead treats its cultural name-dropping as an end unto itself.
Curses strikes me as a young man’s game, in spite of its showy erudition — or perhaps because of it. It was written by a prodigious young man in that wonderful time of life when the whole world of the intellect — all of it — is fresh and new and exciting, when unexpected pathways of intellectual discovery seem to be opening up everywhere one looks. In this light, Emily Short’s description of it as a game about the sheer joy of cultural discovery rings decidedly true. Graham himself recognizes that he could never hope to write a game like it today; thus his wise decision not to return to the well for a sequel.
But to fairly evaluate Curses, we need to understand its place in the timeline of interactive fiction as well as in the life of the man who created it. It’s often billed — not least by myself, in this very article’s introduction — as the game which kicked off the Interactive Fiction Renaissance, the first of a new breed which didn’t have to settle for being the next best thing to more Infocom games. It was the first hobbyist game which could stand proudly shoulder to shoulder with the best works of Infocom in terms of both technical and literary quality.
On the face of it, this is a fair evaluation — which is, after all, the reason I’ve deployed it. Yet the fact remains that Curses’s mode of production and overall design aesthetic mark it as a distinctly different beast from the best later works of the Renaissance it heralded. While the games of Infocom certainly were an influence on it, they weren’t the only influence. Indeed, their influence was perhaps less marked in reality than one might imagine from the game’s intimate connection to the Z-Machine, or from its borrowing of some fairly superficial aesthetic elements from Infocom, such as the letterboxed literary quotations which were first employed to such good effect by Trinity. While Curses’s technology and its prose were unquestionably up to the Infocom standard, in spirit it verged on something else entirely.
In the beginning — the very beginning — text adventures were written on big institutional computers by unabashed eggheads for a very small audience of other eggheads. Games of this type were expected to be hard; questions of fairness rarely even entered the conversation. For these games weren’t just designed for single eggheads to play and conquer — they were rather designed for entire teams of same; adventure gaming in these early days was regarded as a group activity. These games were made publicly available while still works-in-progress; their mode of production bore an ironic resemblance to modern attitudes about “software as a service,” as manifested in modern gaming in things like the Steam Early Access program. In fact, these text-adventures-as-a-service tended not to ever really get finished by their designers; they simply stopped growing one day when their designers left the institution where they lived or simply got bored with them. Graham Nelson was exposed to this tradition early on, via his first encounters with the Crowther and Woods Adventure. (Remember his telling reminiscence: “It seemed like something you were exploring, not something you were trying to win.”) When he came to Cambridge in 1987, he was immersed in a sustained late flowering of this design aesthetic, in the form of the text adventures made for the Phoenix mainframe there.
This attitude cut against the one which Infocom had long since come to embrace by the time Graham arrived at Cambridge: the notion that text adventures should be interactive fictions, soluble by any single player of reasonable intelligence in a reasonable amount of time. As the name “interactive fiction” would imply, Infocom adopted a fundamentally literary mode of production: a game was written, went through lots of internal testing to arrive at some consciously complete state, and then and only then was sent out into the world as the final, definitive work. Infocom might release subsequent versions to fix bugs and incongruities that had slipped through testing, just as the text of a book might receive some additional correcting and polishing between print runs, but Infocom’s games were never dramatically expanded or overhauled after their release. Post-Curses, the hobbyist interactive-fiction community would embrace this Infocom model of production almost exclusively. In fact, a game released “before its time,” still riddled with bugs and sketchily written and implemented, would attract the most scathing of rebukes, and could damage the reputation of its author to the point that she would have a hard time getting anyone to even look at a subsequent game.
Yet Curses was anything but an exemplar of this allegedly enlightened interactive-fiction production function. Graham Nelson’s game grew up in public like the institutional games of yore, being expanded and improved in six major stages, with more than two years elapsing from its first release to its last. Betwixt and between them, Graham shared yet more versions on a more private basis, both among his local peer group and among the burgeoning community of Curses superfans on the Internet. As each new version appeared, these armies of players would jump into it to find the new puzzles and give their feedback on what else might be added to or improved, just as an army of MIT students once did every time the people who would eventually found Infocom put up a new build of the PDP-10 Zork. There are, for example, seven separate ways to solve an early puzzle involving the opening of a stubborn medicine bottle in the final version of Curses, most of them the result of player suggestions.
So, Curses should be understood as an ongoing creative effort — almost, one might say, a collaboration between Graham Nelson and his players — that grew as big as it could and then stopped. A scrupulous commitment to fairness just wasn’t ever in the cards, any more than a rigorously pre-planned plot line. In a telling anecdote, Graham once let slip that he was surprised how many people had finished Curses at all over the years. It was designed, like his beloved Crowther and Woods Adventure, to be a place which you came back to again and again, exploring new nooks and crannies as the fancy took you. If you actually wanted to solve the thing… well, you’d probably need to get yourself a group for that. Even the hint system, grudgingly added in one of the later versions, is oblique; many of the hints come from a devil who tells you the exact opposite of what you ought to be doing. And all of the hints are obscure, and you’re only allowed three of them in any given session.
All of which is to say that, even as it heralded a new era in interactive fiction which would prove every bit as exciting as what had come before, Curses became the last great public world implemented as a single-player text adventure. It’s an archetypal Renaissance work, perched happily on the crossroads between past and future. Its shared debt to the institutional tradition that had stamped so much of interactive fiction’s past and to the Infocom approach that would dictate its future is made most explicit in the name of the language which Graham developed alongside the game. As he told us in the previous article in this series, the first syllable of “Inform” does indeed refer to Infocom, but the second syllable reflects the habit among users of the Cambridge Phoenix mainframe of appending the suffix “-form” to the name of any compiler.
Speaking of Inform: Curses also needs to be understood in light of its most obvious practical purpose at the time of its creation. Most new text-adventure creation systems, reaching all the way back to the time of Scott Adams, have been developed alongside the first game to be written using them. As we’ve seen at some length now in this article and the previous one, Inform was no exception. As Graham would add new features to his language, he would find ways to utilize them in Curses in order to test them out for himself and demonstrate them to the public. So, just as Inform reflects the Z-Machine’s core capabilities, Curses reflects Inform’s — all of them. And because Inform was designed to be a powerful, complete system capable of producing games equal in technical quality to those of Infocom or anyone else, the puzzles which found their way into Curses became dizzying in their sheer multifariousness. Anything ZIL could do, Graham was not so subtly implying, Inform could do as well or better.
Here, then, the Infocom influence on Curses is much more pronounced. You can almost go through the Infocom catalog game by game, looking at the unique new interactive possibilities each release implemented and then finding a demonstration somewhere in Curses of Inform’s ability to do the same thing. Zork II introduced a robot to which the player’s avatar could issue verbal commands, so Curses does the same thing with a robot mouse; Enchanter had an underground maze whose interconnections the player could alter dynamically, so Curses has a hedge maze which let its player do the same thing; Infidel drew hieroglyphic symbols on the screen using groups of ASCII characters, so Curses has to demonstrate the same capability; etc., etc. (One of the few Infocom affordances that doesn’t show up anywhere in Curses is a detailed spell-casting system, the linchpin of the beloved Enchanter trilogy — but never fear, Graham wrote an entirely separate game just to demonstrate Inform’s capabilities in that area.) If all this doesn’t always do much for the game’s internal coherence, so be it: there were other motivations at work.
Graham Nelson’s own story of the first release of Curses is stamped with the unassuming personality of the man. On May 9, 1993, he uploaded it to an FTP site connected with the Gesellschaft für Mathematik und Datenverarbeitung — a research institute in Bonn, Germany, where a friendly system administrator named Volker Blasius had started an archive for all things interactive fiction. He then wrote up a modest announcement, and posted it to the Usenet newsgroup rec.arts.int-fiction — a group originally set up by stuffy academic hypertext enthusiasts of the Eastgate stripe, which had since been rudely invaded and repurposed by unwashed masses of text-adventure enthusiasts. After doing these things, Graham heard…nothing. Feeling a little disappointed, but realizing that he had after all written a game in a genre whose best days seemed to be behind it, he went about his business — only to discover some days later that his incoming Usenet feed was bollixed. When he got it fixed, he found that his little game had in fact prompted a deluge of excitement. No one had ever seen anything like it. Just where had this mysterious new game that somehow ran on Infocom’s own Z-Machine come from? And where on earth had its equally mysterious author gone to after releasing it?
It really is hard to overstate the impact which Curses, and shortly after it Inform, had on the interactive-fiction community of 1993. Text adventures at that time were largely an exercise in nostalgia; even all of the work that had been done to understand the Z-Machine and make new interpreters for it, which had been such a necessary prerequisite for Graham’s own work, had been done strictly to let people play the old games. While some people were still making new games, none of them could comprehensively stand up next to Infocom at their best. Yes, some of them evinced considerable creativity, even a degree of real literary ambition, but these were held back by the limitations of AGT, the most popular text-adventure development system at the time. Meanwhile Adventions, the makers of the most polished games of this period, who were wise enough to use the technically excellent TADS rather than the more ramshackle AGT, were more competent than inspired in churning out slavish homages to Zork. All of the absolute best text adventures, the ones which combined literary excellence and technical quality, were still those of Infocom, and were all more than half a decade old.
And then along came Curses as a bolt out of the blue. Even if we wish to argue that some aspects of it haven’t aged terribly well, we cannot deny how amazing it was in 1993, with its robust determination to do everything Infocom had done and more, with its distinct and confident literary sensibility, and not least — the appeal this held really cannot be emphasized enough — the fact that it ran on Infocom’s own virtual machine. It dominated all online discussion of text adventures throughout the two years Graham spent continuing to improve and expand it in public. The gravitational pull of Curses was such that when Mike Roberts, the creator of TADS, released an epic of his own later in 1993, it went oddly unremarked — this despite the fact that Perdition’s Flames was progressive in many ways that Curses distinctly wasn’t, making it impossible to lock yourself out of victory, prioritizing fairness above all other considerations. It stands today as the better game in mechanical terms at least, recommendable without the caveats that must accompany Graham’s effort. Yet it never stood a chance in 1993 against the allure of Curses.
And so it was that the quiet, thoughtful Englishman Graham Nelson — hardly the most likely leader of a cultural movement — used Curses and Inform to sculpt a new community of creation in his own image.
Graham’s technological choices became the community’s standards to a well-nigh shocking extent. The version 5 Z-Machine, the last and most advanced of its text-only iterations to come out of Infocom, had only been used by a few late Infocom games, none of them hugely beloved. Thus its implementation had tended to be a somewhat low priority among interpreter writers. But when Curses outgrew the 128 K memory space of the version 3 Z-Machine fairly early in its release cycle, and Graham stepped up to the 256 K version 5 Z-Machine, that decision drove interpreter writers to add support for it; after all, any Z-Machine interpreter worth its salt simply had to be able to play Curses, the sensation of the text-adventure world. Thus the version 5 Z-Machine became the new standard for the hobbyist games that followed, thanks not only to its expanded memory space but also to its more advanced typography and presentation options. (Graham would later define two new versions of the Z-Machine for really big games: an experimental and seldom-used version 7 and a version 8 which did come into common use. Both would allow story files of up to 512 K, just like Infocom’s graphical version 6 Z-Machine.)
Graham was utterly disinterested in making money from his projects. He made Inform entirely free, destroying the shareware model of AGT and TADS. David Malmberg, the longtime steward of AGT, stepped down from that role and released that system as freeware as well in 1994, signalling the end of its active development. Mike Roberts did continue to maintain and improve TADS, but soon bowed to the new world order ushered in by Inform and made it free as well. Not coincidentally, the end of the era of shareware text adventures as well as shareware text-adventure development systems coincided with Graham’s arrival on the scene; from now on, people would almost universally release their games for free. It’s also of more than symbolic significance that, unlike earlier hotbeds of text-adventure fandom which had coalesced around private commercial online services such as CompuServe and GEnie, this latest and most enduring community found its home on the free-and-open Internet.
It’s important to note that Graham’s disinterest in making money in no way implied a lack of seriousness. He approached everything he did in interactive fiction with the attitude that it was worth doing, and worth doing well. In the long run, his careful attention to detail and belief in the medium as something worthy of serious effort and serious study left as pronounced a stamp on the culture of interactive fiction as Inform or Curses themselves.
In 1995, he produced “The Z-Machine Standards Document,” which replaced years of speculation, experimentation, and received hacker wisdom with a codified specification for all extant versions of the Z-Machine. At the same time that he worked on that project, he embarked on The Inform Designer’s Manual, which not only explained the nuts and bolts of coding in the language but also delved deep into questions of design. “The Craft of Adventure,” its included essay on the subject, remains to this day the classic work of its type. Working with what was by now an enthusiastic hobbyist community which tempered its nostalgia for the medium’s commercial past with a belief in its possibilities for the present and future, Graham even saw The Inform Designer’s Manual — all 500-plus pages of it — printed as a physical book, at a time when self-publishing was a much more fraught endeavor than it is today.
But the most amusing tribute to the man’s sheer, well-earned ubiquity may be the way that his personality kept peeking through the cracks of every game made with Inform, unless its author went to truly heroic lengths to prevent it. His wryly ironic standard responses to various commands, as coded into the Inform standard library — “As good-looking as ever” when you examined yourself; “Violence isn’t the answer to this one” when you gave in to frustration and started trying to beat on something; “You are always self-possessed” when you attempted to take yourself — proved damnably difficult to comprehensively stamp out. Thus you’d see such distinctly non-Nelsonian efforts as zombie apocalypses or hardcore erotica suddenly lapsing from time to time into the persona of the bemused Oxford don wandering about behind the scenes, wondering what the heck he’d gotten himself into this time.
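For the record, the escape hatch the library does offer is a LibraryMessages object, defined between the Parser and VerbLib includes: whatever it prints replaces the corresponding stock response. The fragment below is a hypothetical sketch, with invented replacement texts, of how an author determined to exorcise the resident Oxford don might begin.

Include "Parser";

Object LibraryMessages          ! consulted before any stock library response
  with  before [;
            Jump:               ! overrides the default reply to JUMP
                "You leap about with most un-donnish abandon.";
            Sing:               ! overrides the default reply to SING
                "Your zombie dirge echoes down the corridor.";
        ];

Include "VerbLib";

Dropped into a skeleton like the one shown earlier, those two strings would supplant the library’s usual replies to JUMP and SING.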
Seen with the hindsight of the historian, the necessary prerequisites to an Interactive Fiction Renaissance aren’t hard to identify. The Internet gave text-adventure fans a place to gather and discuss the games of the past, as well as to distribute new ones, all unbeholden to any commercial entity. Free Z-Machine interpreters made it easy to play Infocom’s games, widely recognized as the best of their type ever made, in convenient ways on virtually every computer in existence. Activision’s two Lost Treasures of Infocom collections made the complete Infocom canon easy to acquire, placing all text-adventure fans on an even footing in the course of providing them with their equivalent of The Complete Works of William Shakespeare. And then Graham Nelson came along and gave so much: a superb programming language in Inform, a superb demonstration of where interactive fiction could go in the post-Infocom era in Curses, documentation that exceeded the standard of most professional efforts, and, perhaps most of all, a living example of how interactive fiction was worth taking seriously in all its aspects, worth doing completely and well — and forget worrying about making money out of it. So, my next statement is as cringe-worthy as it is inevitable: Graham Nelson became interactive fiction’s Renaissance Man.
Now, it was just a matter of time before all of these forces forged something rather extraordinary. The year after Graham arrived on the scene in such exciting fashion was actually one of the quietest in the history of text adventures in terms of new releases; AGT was dying, while Inform was just beginning to pick up steam as an entity separate from Curses. But the following year, 1995, would see an embarrassment of worthy releases, large and small, trying all sorts of things, even as the cultural capstone to the new edifice of post-Infocom interactive fiction — an annual Interactive Fiction Competition — arrived to complete the construction process. The events of 1993 had been the harbinger; 1995 would become the true Year One of the Interactive Fiction Renaissance.
(Sources: the book The Inform Designer’s Manual by Graham Nelson; Stephen Granade’s timeline of interactive fiction on Brass Lantern; archives of rec.arts.int-fiction and rec.games.int-fiction, available on the IF Archive. My warmest thanks go once again to Graham Nelson for sharing so much of his story for these articles.
Curses remains available for free. It can of course be played on any Z-Machine interpreter.)
source http://reposts.ciathyza.com/new-tricks-for-an-old-z-machine-part-3-a-renaissance-is-nigh/
douchebagbrainwaves · 5 years
Text
THE OTHER HALF OF UNIONS
Which caused yet more revenue growth for Yahoo, and further convinced investors the Internet was as late as Newton's time it included what we now call the Metaphysics came after meta after the Physics in the standard edition of Aristotle's works compiled by Andronicus of Rhodes three centuries later. Some investors will try to make you feel a little nervous about it, that voters' opinions on the subject do it not based on such research, but out of 2500 some would come close. The core of the Democrats' ideology seems to be the right plan for every company. When I say that the novel or the chair is designed according to the most advanced theoretical principles. It's like light from a distant star. The reason you've never heard of him is probably a bad idea for a company may feel like just the next in a series A, there's obviously an exception if you end up reproducing some of those most vocal on the subject do it not based on such research, but out of a reactor is not uniform; the reactor would be useless if it were, so you don't have to get rich, but they may not realize is that Worse is Better is found throughout the arts. And since lots of other people want to help them. The Meander is a river in Turkey.
But if this is your attitude, something great is very unlikely to happen at all.1 Actually this is hard to answer. He adds: I remember the Airbnbs during YC is how intently they listened. Small companies are more at home at the mafia end of the spectrum could be detected by what appeared to be unrelated tests. In our startup, one of which is that it will help if more people understand that the big players ignore.2 When I think how hard can it be, visitors must wonder.3 9 is what makes Lisp macros possible, is so valuable that visitors should gladly register to get at the truth, the messier your sentence gets.4 If a shoe pinches when you put your product in beta.
In other words, it's a sign of trouble.5 Why? By 1998, Yahoo was the beneficiary of a de facto employee of the company. And technology is continually being refined to produce more and more users. Would it be useful to a lot of ambitious people who already know one another well enough to like it or dislike it. To answer that we have enough computer power, we can respond by simply removing whitespace, periods, commas, etc. Our startup spent its entire marketing budget on PR: at a time.6
Another group was worried when they realized they had to pay $5000 for the Netscape Commerce Server, the only leverage you have is statistics, it seems a good trend and I expect this to be benevolent.7 I call the Hail Mary strategy. Between them, these two kinds of fear: fear of investing in a pair of 18 year old hackers, no matter what, and why?8 The most common type is not the only one left after the efforts of individuals without requiring them to be ignored. Sometimes you need an idea now. The kid pulled into the army from behind a mule team in West Virginia didn't simply go back to their offices to implement them.9 This doesn't mean big companies will start to shift back. Just listen to the people who teach the subject in universities. But as long as acquirers remain stupid.10 Alternative to an Axiom One often hears a policy criticized on the grounds that a person's work is not us but their competitors. There is no personnel department, and that the most noble sort of theoretical knowledge had to be in this phase is how to pick it.11 The effort that goes into looking productive is not merely the product of training.
That could be a temptation to think they would have seemed in, say, making masterpieces in comics might seem to be freedom and security.12 From this point, unless you got the money. 4 days he went from impecunious grad student to millionaire PhD.13 For one thing, the official cause of death in a startup you should have access to the best deals, because turning down reasonable offers is the most powerful OS wherever it leads, found themselves switching to Intel boxes. Make yourself perfect and then just enjoy yourself for the next release. The way not to seem desperate is not to spend it doing fake work. I predict that in the future. They each constrain the other.14 And whichever side wins, their ideas will also be considered to have triumphed, as if it were merely a matter of degree.
Get into the habit of so many present ills: specialization. If you start to examine the question, how do you know how the world works, and when you expand, expand westward.15 The replies surprised me. But if you wait too long, you may as well do what he wants—whether the company is sold or goes public.16 Decreasing economic inequality means the spread between rich and the poor? And once you've done it. This is what kills you.17 An essay is supposed to be working on; there's usually a reason.18 In effect the valuation is 20 million. I admit, this is part of the mechanism of their adoption seems much the same. Which explains the astonished stories one always hears about VC inattentiveness. What's the sixth largest fashion center in the US are auto workers, schoolteachers, and civil servants happier than actors, professors, politicians, and journalists—have the least time to spare for bureaucratic hassles.
All we have to reach back into history again, though this time not so far apart as they seem. This is not just a useful illusion. Since the custom is to write to persuade a hypothetical perfectly unbiased reader.19 But they also influence one another both directly and indirectly. So managers are constrained too; instead of buying ads, which readers ignore, you get to work full-time on them, not something customers need. Why the pattern? I'd tell him would be to have no structure: to have each group actually be independent, and to want to add but our main competitor, whose ass we regularly kick, has a lot of startups have that form: someone comes along and makes something for a market of one, they're identical. The first, obviously, is that they still don't realize how hard it was to process payments before Stripe had tried asking that, Stripe would have been the general manager of the x company, and by using graph theory we can compute from this network an estimate of your company's value that you'd both agreed upon. But the first is to tell them everything either.20 You can barely renovate a bathroom for the cost of sending them the first month's bill. Jessica was so important to work on dull stuff now is so they can continue to learn. Siegel, Jeremy J.21
C, in order to avoid this problem, any more than you actually are. I wouldn't try it myself. They act as if they were one person. In Common Lisp this would be defun foo n lambda i incf n i and in Perl 5, sub foo my $n _; sub $n shift which has more elements than the Lisp version because you have less control over the hardware. When investors ask you a question you don't know exist yet. I wonder what's new online. If you try something that has to be powerful enough to enforce a taboo. Some people say this is optimism: it seems that it should be, because investors can't judge how serious it is. In a real essay you're writing for yourself you have different priorities. More or less. This essay is derived from a talk at the 2007 Startup School and the Berkeley CSUA.
Notes
Companies didn't start to rise again. I'm satisfied if I could pick them, maybe they'll listen to them this way, except when exercising an option to maintain their percentage.
Trevor Blackwell, who would in itself deserving. Which is not limited to startups. After reading a talk out loud can expose awkward parts.
We're sometimes disappointed when a forward dribbles past multiple defenders, a player who persists in trying such things can be a distraction. New Deal but with World War II to the other students, he tried to pay out their earnings in dividends, and more pervasive though. But you can send your business plan to make money.
And then of course the source files of all. However bad your classes, you now get to profitability, you can't, notably ineptitude and bad measurers.
But the margins are greater on products.
As a rule, if they knew their friends were.
It's sometimes argued that kids who went to school. Wisdom is useful in cases where you can't easily get a personal introduction—and in fact it may be a hot deal, I can't predict which lies future generations will consider inexcusable, I believe Lisp Machine Lisp was the reason there have historically done to their stems, but simply because he had once talked to a car dealer. 1% a week before. So the most abstract ideas, just harder.
Jessica Livingston's Founders at Work.
Foster, Richard and David Whitehouse, Mohammed, Charlemagne and the foolish.
Few technologies have one. Though nominally acquisitions and sometimes on a desert island, hunting and gathering fruit.
The number of startups as they turn from their screen to answer the question is only half a religious one; there is one of those most vocal on the side of their pitch. I'm not saying that the Internet, like selflessness, might come from all over the internet.
Philadelphia.
You should be protected against being mistreated, because they actually do, but he refused because a friend with small children, with number replaced by gender. The wartime versions were much more drastic and more pervasive though. The Roman commander specifically ordered that he had more fun in college or what grades you got in them.
I don't know yet what they're building takes so long.
For example, the switch in mid-game. Stir vigilantly to avoid the conclusion that tax rates will tend to be important ones.
Patrick Collison wrote At some point, when in fact had its own mind. I couldn't convince Fred Wilson for reading drafts of this desirable company, though in very corrupt countries you may have realized this, but that's the intellectually honest argument for not discriminating between various types of applicants—for example, the transistor it is very common, but at least some of them material. Google Video is badly designed. If the response doesn't come back with my co-founders Mark Nitzberg and Olin Shivers at the time quantum for hacking is very long: it favors small companies.
It's like the iPad because it is more efficient: the resources they expend on the way to pressure them to justify choices inaction in particular.
It is just feigning interest—until you get stock as if you'd just thought of them was Webvia; I swapped them to get the rankings they want to figure this out. Which means one of the class of 2007 came from such schools. The University of Vermont, 1991, p.
Users had been transposed into your head. Unfortunately the constraint probably has a finite market value. I'd say the rate of change in how Stripe felt. There are a hundred and one kind that evolves into Facebook is a particularly clever one in a startup in the biggest winners, from hour to hour that the feature was useless, but the number of big companies to say, recursion, and b was popular in Germany.
The mystery comes mostly from looking for something that doesn't exist.
Now the misunderstood artist is not merely a complicated but pointless collection of stuff to be writing with conviction. Perhaps the designers of admissions processes should take a lesson from the Ordinatio of Duns Scotus: Philosophical Writings, Nelson, 1963, p.
Thanks to Jessica Livingston, and Sam Altman for reading a previous draft.
0 notes
rightsinexile · 5 years
Text
Case Notes
Grace v. Whitaker: Judge blocks US asylum rules for domestic, gang abuse survivors
This piece was written by Nidia Bautista and originally published by Al Jazeera on 19 December 2018. The original article publication can be found here. The opinion in its entirety can be found here.
A US federal judge on Wednesday struck down the policies put in place by former Attorney General Jeff Sessions that made it harder for individuals fleeing domestic and gang violence to obtain asylum.
Last June, Sessions reversed precedent put forth by the Obama administration that allowed more individuals to cite domestic violence and fears of gang violence as part of their asylum application. Sessions argued that "the mere fact that a country may have problems effectively policing certain crimes - such as domestic violence or gang violence - or that certain populations are more likely to be victims of crime cannot itself establish an asylum claim."
On Wednesday, Judge Emmet Sullivan found the policies "arbitrary, capricious and in violation of the immigration law". Sullivan also ordered federal officials to return plaintiffs who were deported and provide them with new credible fear determinations "consistent with the immigration law."
The decision was the result of a lawsuit filed by the American Civil Liberties Union (ACLU) on behalf of a dozen asylum seekers, including children, who had their asylum claims rejected after their "credible fear" screenings by asylum offices, an initial step in the asylum process where officers determine whether asylum-seekers have a well-founded fear of persecution.
Plaintiffs in the ACLU case were placed in removal proceedings without a hearing, according to the ACLU. That included an indigenous woman named Grace, who fled Guatemala after enduring 20 years of rape and beatings by a partner.
Judge Sullivan permanently blocked the government "from continuing to apply those practices and from removing plaintiffs who are currently in the United States without first providing credible fear determinations consistent with the immigration laws."
In the Mexican border city of Tijuana, where thousands of Central American migrants and refugees have been living in different shelters after arriving last month as part of a collective exodus, migrants, including Xiomara Ramirez Hernandez, hope the decision will help their chances of getting asylum in the coming months. Up until today Hernandez, 52, from San Salvador, El Salvador, didn't think she had a good chance of obtaining asylum. She's staying in Tijuana's El Barretal shelter, along with more than 3,000 other migrants and refugees, and has been in the border city for a month now. On October 18 of this year, she was ordered to leave San Salvador within 24 hours by a member of the Barrio 18 gang, a known rival of the MS 13 gang.
The threat came after Hernandez found out her 22-year-old daughter was found beaten and naked in an empty lot in the capital city. She had been attacked, raped and left for dead by a member of Barrio 18. Hernandez approached San Salvador's attorney general's office to report the crime. Within days the perpetrator showed up at her door and gave her an ultimatum: leave the city or stay and be killed.
She joined the migrant caravan to escape the threats. On Wednesday morning, Hernandez was thinking about leaving Tijuana and travelling to Ciudad Juarez to look for work. But Judge Sullivan's decision might mean a change of plans. "If I have the chance to present my asylum claims, I'm going to, absolutely," she told Al Jazeera. "I won't survive back in my country."
Human rights advocates and lawyers slammed Sessions's decision earlier this year as a violation of women's human rights. Gender violence is a widespread issue in Central America and at least 50 percent of women experience domestic violence in the region. UN Women has said that domestic violence represents only the beginning of a number of violent acts that could culminate in femicide. An estimated 387 women were killed in Honduras in 2017 while every 18 hours a woman was killed in El Salvador that year.   The Justice Department said it was still deciding whether it would appeal the decision.
How far-reaching is the impact of Grace v. Whitaker?
This case note was written by Jeffrey Chase, and originally published on 24 December 2018. It is reprinted here with the author’s permission, all rights reserved.
Six months after a significant number of US immigration judges cheered a decision intended to revoke the hard-earned right of domestic violence victims to asylum protection, immigration advocates had their chance to cheer last week’s decision of US District Court Judge Emmet G. Sullivan in Grace v. Whitaker. The 107-page decision blocks USCIS from applying the standards set forth in a policy memo to its asylum officers implementing the decision of former Attorney General Jeff Sessions in Matter of A-B-. Judge Sullivan concluded that “it is the will of Congress - and not the whims of the Executive - that determines the standard for expedited removal,” and therefore concluded that the policy changes contained in the USCIS memo were unlawful.

In his decision in Matter of A-B-, Sessions stated that “generally, claims … pertaining to domestic violence or gang violence will not qualify for asylum.” In a footnote, Sessions added “accordingly, few such claims would satisfy the legal standard to determine whether an [asylum applicant] has a credible fear of persecution.” Read properly, neither of those statements is binding; they are dicta, reflecting Sessions’ aspirations as to how he would like his decision to be applied in his version of an ideal world. However, both the BIA and the author of the USCIS policy memo forming the basis of the Grace decision drank the Kool-Aid. The BIA almost immediately began dismissing domestic violence cases without the required individualized legal analysis. And USCIS, in its memo to asylum officers, stated that in light of A-B-, “few gang-based or domestic violence claims involving particular social groups defined by the members’ vulnerability to harm may … pass the ‘significant probability’ test in credible fear screenings.”

If one reads Matter of A-B- carefully, meaning if one dismisses the more troubling language as non-binding dicta, its only real change to existing law is to vacate the precedent decision in Matter of A-R-C-G- which had recognized victims of domestic violence as refugees based on their particular social group membership. A proper reading of A-B- still allows such cases to be granted, but now means that the whole argument must be reformulated from scratch at each hearing, requiring lengthy, detailed testimony of not only the asylum applicant, but of country experts, sociologists, and others. Legal theories already stipulated to and memorialized in A-R-C-G- must be repeated in each case. Such a Sisyphean approach seems ill-suited to the current million-case backlog.

However, the BIA and the USCIS memo chose to apply Sessions’ dicta as binding case law, an approach that did in fact constitute a change in the existing legal standard. When the Department of Justice argued to the contrary in Grace, Judge Sullivan called shenanigans, as USCIS’s actual application of the decision’s dicta to credible fear determinations harmed asylum applicants in a very “life or death” way.
The judge also reminded the DOJ of a few really basic, obvious points that it once knew but seems to have forgotten in recent years, namely (1) that the intent of Congress in enacting our asylum laws was to bring our country into compliance with the 1951 Convention on the Status of Refugees; (2) that the UNHCR’s guidelines for interpreting the 1951 Convention are useful interpretive tools that should be consulted in interpreting our asylum laws, and (3) that UNHCR has always called for an expansive application of “particular social group.” Judge Sullivan further found that as applied by USCIS, the should-be dicta from A-B- constitutes an “arbitrary and capricious” shift in our asylum laws, as it calls for a categorical denial of domestic violence and gang-based claims in place of the fact-based, individualized analysis our asylum law has always required.

How far-reaching is the Grace decision? We know that the decision is binding on USCIS asylum officers, who actually conduct the credible fear interviews. But is the decision further binding on either immigration judges or judges sitting on the Board of Immigration Appeals? USCIS of course is part of the Department of Homeland Security. Immigration judges and BIA members are employees of EOIR, which is part of the Department of Justice. Its judges are bound by precedent decisions of the Attorney General, whose decisions may only be appealed to the Circuit Courts of Appeal. However, the credible fear process may only be reviewed by the US District Court for the District of Columbia, and only as to whether a written policy directive or procedure issued under the authority of the Attorney General is unconstitutional or otherwise in violation of law. This is how Grace ended up before Judge Sullivan. The BIA and Immigration Judges generally maintain that they are not bound by decisions of district courts.

Despite these differences, the credible fear interviews conducted by USCIS are necessarily linked to the immigration court hearings of EOIR. An asylum officer with USCIS recently described the credible fear interview process to me as “pre-screening asylum cases for the immigration judge.” The credible fear process accounts for the fact that the applicant has not had time yet to consult with a lawyer or gather documents, might be frightened, and likely doesn’t know the legal standard. But the purpose of the credible fear interview is to allow the asylum officer to gather enough information from the applicant to determine if, given the time to fully prepare the claim and the assistance of counsel, there is a significant possibility that the applicant could file a successful claim before the immigration judge.

The credible fear standard has always been intended to be a low threshold for those seeking asylum. Before A-B-, a victim of domestic violence was extremely likely to meet such standard. The USCIS memo reversed this, directing asylum officers to categorically deny such claims. But now, pursuant to Grace, USCIS must go back to approving these cases under the pre-A-B- legal standard. When an asylum officer finds that the credible fear standard has not been met, the only review is before an immigration judge in a credible fear review hearing. Although, as stated above, EOIR generally argues that it is not bound by district court decisions, its immigration judges would seem to be bound by the Grace decision in credible fear review hearings.
Congress provided the district court the authority to determine that a written policy directive of the AG (which was implemented by the USCIS written policy memo) relating to the credible fear process was in violation of law, and Judge Sullivan did just that. Even were EOIR to determine that the decision applies only to USCIS, the IJ’s role in the credible fear review hearing is to determine if USCIS erred in finding no credible fear. If USCIS is bound by Grace, it would seem that IJs must reverse an asylum officer’s decision that runs contrary to the requirements of Grace.

But since the credible fear standard is based entirely on the likelihood of the asylum application being granted in a full hearing before an immigration judge, can EOIR successfully argue that its judges must apply Grace to conclude that yes, a domestic violence claim has a significant chance of being granted at a hearing in which the IJ will ignore the dicta of A-B-, find that the only real impact of the decision was that it vacated A-R-C-G-, and will thus apply an individualized analysis to an expansive interpretation of particular social group (with reference to UNHCR’s guidelines as an interpretive tool)? And then, once the case is actually before the court, ignore Grace, and apply what appears to be the BIA’s present approach of categorically denying such claims?

Many immigration judges are presently struggling to understand Matter of A-B-. The decision was issued on the afternoon of the first day of the IJ’s annual training conference. This year’s conference was very short on legal analysis, as the present administration doesn’t view immigration judges as independent and neutral adjudicators. But the judges tapped for the asylum law panel had to throw away the presentation they had spent months planning and instead wing a program on the A-B- decision that they had only first seen the prior afternoon. Needless to say, the training was not very useful in examining the nuances of the decision. As a result, fair-minded judges are honestly unsure at present if they are still able to grant domestic violence claims.

Of course, a decision of a circuit court on a direct challenge to A-B- would provide clarification. However, A-B- itself is presently back before the BIA and unlikely to be decided anytime soon. I am aware of only one case involving the issue that has reached the circuit court level, and it is still early in the appeal process. My guess is that EOIR will issue no guidance nor conduct specialized training for its judges on applying A-B- in light of the Grace decision. Nor will the BIA issue a new precedent providing detailed analysis to determine that a domestic violence claimant satisfied all of the requirements set out in A-B- and is thus entitled to asylum. A heartfelt thanks to the team of outstanding attorneys at the ACLU and the Center for Gender and Refugee Studies for their heroic efforts in bringing this successful challenge.
New guidance requires fair process for domestic violence, gang asylum claims at the US border
The following was released by the Centre for Gender & Refugee Studies at the University of California-Hastings on 14 January 2019. The guidance for both EOIR and USCIS can be found here.
The Centre for Gender & Refugee Studies (CGRS) is pleased to share the government’s new guidance for asylum cases which clarifies that there is no blanket rule against claims involving applicants fleeing domestic violence and gang violence. The guidance was issued in accordance with US District Court Judge Emmet Sullivan’s recent ruling in Grace v. Whitaker. The Grace lawsuit, filed by CGRS and the American Civil Liberties Union (ACLU) last summer, challenged the implementation of former Attorney General Jeff Sessions’ Matter of A-B- decision in credible fear proceedings. Under US Citizenship and Immigration Services (USCIS) guidance issued after A-B-, many asylum officers and immigration judges had been rejecting survivors of domestic violence and gang violence at this initial screening stage, returning countless asylum seekers to the risk of life-threatening harm. In his December decision Judge Sullivan ruled that key legal interpretations in Matter of A-B- and related USCIS policy memos were arbitrary, capricious, and unlawful. He granted our request for a permanent injunction, blocking asylum officers and immigration judges conducting credible fear interviews and review hearings from implementing them. Notably, Judge Sullivan ruled that each case must be considered on its own facts, and that there can be no general rule against asylum claims based on domestic violence and gang violence in the credible fear screening process. He also rejected the government’s attempt to impose a heightened legal standard in cases involving violence perpetrated by non-government actors, such as intimate partners and members of criminal organizations. The new guidance brings the government into compliance with the Court’s order and requires that asylum officers and immigration judges provide a fair process for asylum seekers in credible fear proceedings, including those presenting themselves at our southern border. CGRS encourages advocates representing individuals at all stages of the asylum process to review the new guidance closely and to contact CGRS for our most up-to-date litigation resources and practice pointers.
Rayamajhi v. Whitaker: No de minimis funds exception to the material support bar
This note concerns the Rayamajhi opinion released by the court on 15 January 2019.
The panel dismissed in part and denied in part a petition for review of Board of Immigration Appeals’ denial of asylum and withholding of removal to a citizen of Nepal under the material support terrorist bar. The panel held that petitioner’s argument for a duress exception to the material support bar is foreclosed by Annachamy v. Holder, 733 F.3d 254 (9th Cir. 2013), overruled in part on other grounds by Abdisalan v. Holder, 774 F.3d 517 (9th Cir. 2015) (en banc), and therefore does not constitute a colorable legal or constitutional question providing jurisdiction over the otherwise unreviewable material support determination. The panel held that there is no de minimis funds exception to the material support bar. The panel explained that the plain text of the statute, 8 U.S.C. § 1182(a)(3)(B)(iv)(VI), states that funds knowingly given to a terrorist organization are material support, regardless of the amount given. The panel further held that even if the statute is ambiguous on this point, the Board’s interpretation in In re A-C-M-, 27 I. & N. Dec. 303 (B.I.A. 2018), that there is no de minimis exception, was based on a permissible construction of the statute, and therefore is entitled to Chevron deference.
0 notes
getaether · 6 years
Text
Aether News & Updates - July / August 2018
Hey folks - this is the Aether monthly. This one is pretty exciting with screenshots, and it is probably going to be the last pre-launch update, because it's about one week to some semblance of code complete. The app is running fine on my machine now, with all features built for the backend and frontend, and most for the client (user interface) side. There are a few screenshots of the current state below. None of those are design mocks; it's the real thing, pulling data from other real nodes. (The data within them is auto-generated for load testing, so there are no users on the system right now.)
Recap: I'm new. What's this thing?
Hey there - welcome in! There was a write-up on that last month, do check it out here.
July updates
Javascript
July was mostly about going deep into the Javascript rabbit hole to build the client user interface. (Quick recap: Aether has three parts: client, frontend and backend. Client is UI, frontend is the graph compiler, and the backend is responsible for distribution of underlying entities and communicating with other nodes.)
The tl;dr is that designing and building user interfaces is fun —to me, at least— but doing it in a world where everybody wants you to use Webpack in their own way with slightly different and subtly incompatible configuration is not.
Packing, unpacking and compilers
There is one general rule that is useful in building most things: do the simplest, dumbest thing that works. I have heard it called Linus's rule, though it is not entirely clear if Torvalds has ever said it. (Nevertheless, the Linux kernel and its success could be considered a manifestation of this idea.)
Go works well for this mindset, and that's why the frontend compiler and the backend are written in it. It's a place where you can actually control the layout of your structs bit by bit in memory, and it does a great job of getting out of your way, in that the abstraction of Go is fairly thin: you are working with a CPU and a memory that you can put bits into, and there is a small, helpful, and fast garbage collector provided if you want to use it. That means, in Go, you can have as few third party dependencies as possible, and for the things that do not exist in the standard library, you can efficiently build what you want. It also helps that the core language is very small, and there is usually one way to do things. This is very useful when you actually want to read code on Github to see how libraries implement a specific piece of functionality. Everybody has to write it mostly the same, so library authors aren't writing what is effectively a completely different language. coughC++cough
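As a rough illustration of that struct-layout point (a standalone sketch, not code from Aether), here is a stdlib-only Go program where simply reordering fields changes how a struct sits in memory:

```go
package main

import (
	"fmt"
	"unsafe"
)

// Two logically identical structs. Field order determines padding,
// so their in-memory sizes differ on a typical 64-bit machine.
type padded struct {
	a bool  // 1 byte, then 7 bytes of padding to align b
	b int64 // 8 bytes
	c bool  // 1 byte, then 7 bytes of trailing padding
}

type packed struct {
	b int64 // 8 bytes
	a bool  // 1 byte
	c bool  // 1 byte, then 6 bytes of trailing padding
}

func main() {
	// Typically prints "24 16" on 64-bit platforms.
	fmt.Println(unsafe.Sizeof(padded{}), unsafe.Sizeof(packed{}))
}
```

Nothing in it needs a third-party dependency, which is the other half of the appeal described above.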
With this in mind, I opted for Vue as the base for building the client. This was largely motivated by my desire to avoid Webpack, and this whole seemingly-always-half-broken part of the Javascript ecosystem that frequently picks a favourite son every 6 months and kills it with fire shortly afterwards. Considering past experience with projects that had to be maintained while still on Grunt (remember that?), and others still on Gulp, going into this again with Webpack did not sound enticing in the least, especially considering that its main selling point (a single Javascript file) is not very useful in a local Electron environment. To that end, Vue offers the ability to use it the old-school way, which is to include it with a plain `<script>` tag. Very simple! Since my use case did not need Webpack, this made Vue a first and only choice. Much later on, and long beyond the point of no return, however, it became obvious that this was a siren song.
The issue was this — Vue considers the ability to split your code into multiple Javascript files (single-file components) an advanced feature. And the moment you hit this, which is early enough that your project is not yet functional, but late enough that you can't really back out, Vue tells you that you have to use Webpack.
This was a huge disappointment, since the majority of the reason for using Vue was to avoid Webpack. It's not that one cannot get it to work, it's just that it does not make sense to integrate with yet another build system, especially in this specific case, one that offers exactly zero benefits. Ostensibly it makes things easier for library developers (since they can offload a decent chunk of 'getting things to work' part of library-building to Webpack, which, promptly, unloads it onto you), but it would have been very nice if Vue had provided a way to use single-file components in a way that does not require a build step, especially considering even Angular-1 in 2014 had a way to do so, in the form of ng-include directive.
(I know that it is not the fastest, since fragments have to be all loaded before the page needs to render, but in this case, the files are already available on the local filesystem, thus it does not matter.)
Nevertheless, it was a good few days spent on getting Vue, Webpack and Typescript working. To be fair, Vue is awesome, Typescript genuinely fixes legitimate problems with Javascript, and Webpack can help under the right conditions. It is a good thing that all three exist. It's just that it is a little sad that the Javascript environment is such a high-churn, high-thrash place, and coming from 4-5 months of writing Go code where the language is sound, the libraries, the very few you need, are of excellent quality, and there is only one specification of the language, one gets to realise why the concept of Javascript Fatigue exists.
I do not want to belabour the point. One last thing that is quite interesting is that the three main Javascript libraries used here (Vue, Typescript and Webpack) don't even agree on what is syntactically valid Javascript. There are multiple specs of Javascript floating around, and there is Babel, which is a library that compiles new, not-yet-ratified (often, not-yet-finished) Javascript language changes into old, mostly-valid Javascript that you can run. It is a fun tool to test new syntaxes. Unfortunately, it appears that everyone is so sick of old, standard Javascript that Babel is a dependency of almost everything, so that library authors can write new, shiny Javascript.
Considering the fact that it implements changes that are drafts, and drafts sometimes do change, the tendency to use the newest, cutting-edge, unstable Javascript features ends up in a place where the documentation on the Internet is written in Javascript that you don't understand (because it was created literally a week ago), that does not work, that does not compile (because the spec changed two days ago), and that is, most importantly, for most intents and purposes, of negative actual value.
If you are a library author, or if you are writing documentation of any form, please consider writing your documentation in good, old, standard Javascript that actually executes without requiring a compiler set up exactly the same way you have on your local machine.
Regardless, it should be obvious where this is going at this point. Fast forward a month of writing Javascript, and there are more than one million lines of Javascript in the node_modules directory, totalling 2+gb of code that there is no actual dependency on, but that is there nevertheless due to how the node ecosystem and Webpack in general work. Thankfully, the final end result is only a few megabytes of front-end code, since that is the actual code that is needed. The rest is code that Webpack refuses to compile without, but as far as one can see, none of it goes into the final result.
The fundamental goal was to achieve a client that was simple to write, simple to understand, and simple to maintain. Unfortunately, it appears that this is close to, in the Javascript world, a yet-unsolved problem. There will be some maintenance to do when Webpack goes the way of Grunt and Gulp. Regardless, I'm happy with the client code given the constraints, and it is as simple and maintainable as technically possible. The hope is that it will only improve.
WebAssembly is something to watch, because it does allow one to completely replace Javascript and its dependencies with another language of your choice (which you can do right now), and have it be able to interact with the DOM (which you cannot, and won't be able to for the next few years). Since what matters is the DOM access, whenever that comes, it will be some welcome competition to Javascript and its environment.
August updates
Current status
As of today, the app is about a week from code complete; there are a few pages that need to be designed and implemented (notifications, status, home/popular views), and a few components that need to be finished (post and thread ranking logic based on signals). I'm also getting an Apple Developer Key, so that in OS X, the app will open without scary warnings, and I will be able to push auto-updates in a way similar to how Firefox does. (Download the update in the background, install on next start, with an ability to disable it.)
I will also be putting out a Discourse forum for user support coincidentally, so that there will be community support available.
Integration testing
The expected code complete is the end of this week, but given the prima facie impossibility of giving accurate estimates in building anything of nontrivial complexity (estimating the task takes full knowledge of the task, and in building tech products, the task itself is 90% getting the full knowledge of the task at hand, and only 10% building it), some skepticism is probably warranted.
Regardless, the goal is to have a pre-release executable for OS X (only because I develop on OS X; the goal is to provide Windows, OS X and Linux versions in general availability, same as Aether 1) as a test of network function and to find bottlenecks, if any. It will be followed by a beta release that is generally available, and from that point on, the auto-update system will keep your app up to date.
In short, by the end of August, the network should be transmitting data in some form.
Feature freeze
A feature freeze is a set date where a product stops adding new features before a launch date. This is a useful cutoff, because trying to cram as many features as possible into a launch can mean biting off more than you can chew. There are some features that did not make it to the feature freeze deadline of August 1: the feature set that allows for ongoing moderator ballots in communities. These features are code complete for backend and frontend, but they will ship disabled in the first version until the client (UI and product integration) is complete. After the product stabilises and the network is operating correctly under real load, this is what comes next. The expectation is that they will make it into the first non-beta release.
Code freeze
A code freeze is when one stops writing new code and focuses on bug fixes, finishing existing features, stability, and QA. The pre-release version will be the public QA test, and if everything goes right, the external QA test will replicate the results of the internal one that I've been running on my own local network. The rough code freeze date is August 15.
Pre-release
After the code freeze, the tasks that remain are those of packaging the app into an executable, updating the website, and setting up a bootstrap node so that newly downloaded nodes will have a point to bootstrap from.
Beta launch
This will happen when the pre-release gives enough confidence that the app is stable, and can handle a nontrivial amount of users. Here, the major thing to keep an eye on is not that the app would not work (it does now, and it likely will continue to do so), but whether the network is slow to deliver every post to every participating node. Since this requires fiddling with network variables (such as how often a node checks for updates in a minute) and is untestable in a synthetic environment, the only real way to know this is to actually have real people using it, and to get real-world data on dispersion speeds.
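To make the "network variables" idea concrete, here is a minimal, hypothetical Go sketch (not Aether's actual code) of a sync loop whose check frequency is a single tunable value; raising or lowering checkInterval is exactly the kind of knob that real-world dispersion data would inform:

```go
package main

import (
	"fmt"
	"time"
)

// checkInterval is the kind of tunable network variable described above:
// how often a node asks its peers for new posts. A shorter interval means
// faster dispersion at the cost of more network chatter.
const checkInterval = 2 * time.Minute

// checkPeersForUpdates stands in for the real peer-sync logic.
func checkPeersForUpdates() {
	fmt.Println("polling peers at", time.Now().Format(time.Kitchen))
}

func main() {
	ticker := time.NewTicker(checkInterval)
	defer ticker.Stop()

	checkPeersForUpdates() // one pass immediately on startup
	for range ticker.C {
		checkPeersForUpdates()
	}
}
```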
From this point on, it gets a little hard to say, because it depends on the reception of the app, and the real-world usage. If this app becomes a place that we all can enjoy spending our evenings in, I personally would consider it a success, even if it does not succeed in financially supporting me working full time on it. That said, the ideal path that I have in mind is that this would support me and a small core team (which does not yet exist, it's just me, for now) financially enough that we can devote our full focus to offering a mass communication platform that is free of the catches of what's available today. After all, Aether offers what they won't sell you at any price: your privacy.
Screenshots
Not mocks, actual app! 🙂 (It goes without saying that these are all works-in-progress.)
Addendum: Who's funding you / this work?
I'm currently working on this project full time, and I have been doing it for the past six months. I'm funding it through my own savings. My initial plan is that I want to release the full app, and then consider getting a job, because, well, my savings aren't infinite. That lines up nicely with the end of the majority of the development effort for the release, so for the initial release, I don't foresee any problems. After that, it all depends on how many people actually end up using it. If a lot of you guys do, then we'll find a way to make it work.
My motivation is not that there's money to be made out of this. If you're interested in that, ICOs might be a better bet, but I'd still recommend index funds over anything crypto-related, Vanguard is a good provider for that. I find most ICOs to be fairly distasteful (with some very rare exceptions), because I'd rather have my work speak than empty promises, as opposed to what ICOs typically do, hence my starting to speak about my work just now, when 80%+ of the total work is already done and implemented in cold, hard code.
If you are happy with any of the existing solutions solving similar problems out there, you should consider donating to them. There are a lot of open source creations for different kinds of needs. They do often suffer from poor user experience and poor usability, though, which unfortunately renders them unable to gain adoption. If your project requires the user to compile it before running it, it's fair to say that your audience is not the general public. That is in itself not a bad thing, and if any of those projects actually satisfy your needs, consider donating to them. I'm focusing on a better experience for the mass public, to provide all of us with more accountability, more transparency, more privacy, and hopefully some real discussion that we all sorely need — as a human society, we've gotten far too good at stifling each other's voices.
The reason I think I can make a difference is that I'm an ex-Google, ex-Facebook product designer with a lot of experience in the field, and while I have some ideas on what can be improved, I've learned the long and hard way that the best ideas come from people who use the actual thing. I've also learned a second, almost as crucial thing: people doing something that they genuinely, honestly care about tend to achieve much better results than a team of highly experienced specialists that are just meh about it, even if the first team is relatively inexperienced. I do have experience, and I do feel this problem every day, to the point that I was willing to quit my job and start working full time on it using my life savings. I'm lucky to be able to do that, but more than luck, I also care enough about this that I wanted to do that.
In summary, I think there's a need for it, and it should exist, in some form or another. Nobody else seems to be doing it, so I'm doing it. This won't make you rich, this definitely won't make me rich, but this, if done right, has the chance to create an open, simple, free, decentralised, and private way of mass-communicating for the next half century — a public online gathering place that also happens to belong to the public, with no catches, and no kings.
If you think this is worth supporting, you can fund me via Bitcoin at: 1K12GwzAtPWa6W864fkm48pc4VNqzHXG4H, and via Ethereum at: 0x3A4eC55535B41A808080680fa68640122c689B45. Your funding will extend the amount of time I can stay working full time on it. I don't need the funding to finish it (I made sure that I can actually finish it with my own savings), and if you are a student, new graduate, or on a budget, you should not — keep your money, use it for yourself and use your money for good whenever you can. But if you consider yourself relatively well-off and thinking about how your money can do the most good in the world, and you're interested enough that you've read this far, I'd offer this as a pretty good bet.
I'll open a Patreon page, eventually, but I'm focusing on actually writing the code and releasing the app right now. The Patreon benefits I have in mind are mostly cosmetic, such as getting a 'benefactor' username class with some visual distinction, and priority in picking from available unique usernames. If you fund me through Bitcoin or Ethereum, you'll get the same benefits as Patreon supporters.
Lastly, if you're in San Francisco, and you're excited about this, have questions, hit me up, always happy to go for a coffee. Might take a while to schedule considering that I work 14-16 hour days, but we'll probably find a time.
As per usual, feel free to reach out to me with any questions at [email protected].
0 notes
ashleyhinesblog · 6 years
Text
Review GDPR Pro Software - NEW ONLINE SOFTWARE Removes GDPR Problems Immediately!
Review GDPR Pro Software 
Vendor: Mario Brown
Product: GDPR Pro Software
Launch Date: 2018-May-27
Launch Time: 11:00 EDT
Front-End Price: $27
Niche: Software
Check Out The Demo Video
vimeo
Click for more info
1. GDPR Pro is the COMPLETE Package.
Just look at the features that come standard:
 Software brings your site into compliance on 8 Key GDPR requirements.
 Works with your blog or any other custom implementation of WordPress.
 Works with E-Commerce stores.
 The cookie requirement compliance assures your EU visitors are briefed about cookie policy.
 'Terms and conditions' policy compliance gets your visitors consent to your T&C.
 'Privacy policy' compliance creates consent requirement for your Privacy Policy.
 'Right to Forget' compliance lets you delete your user data manually.
 Option to refuse to accept EU traffic on your site (built into the plugin).
 User consent log, in case regulators come knocking on your door!
 You can use on up to 10 personal websites
2. You'll Be Protecting Your Visitors By Providing Them:
Right of Access: Your subscriber now has the right to request a copy of any of their information you have on file.
Right of Erasure: Built into our plugin, add the form and users can delete all their data while on your website.
Breach Notification: Your users automatically get data breach notification - by law you have 72 hours from first becoming aware of the breach.
Cookies Consent: All cookies / pixels are automatically blocked until your website visitor gives their consent.
Right of Rectification: Your subscriber can update their personal data at any time.
Right to Object: Simplified system where your subscriber can easily unsubscribe at any time.
Terms & Condition Pages: Auto-generate privacy, terms and condition page with a built-in auto redirect until the condition is met.
Tumblr media
Standard license includes up to 10 personal websites
3. Here’s what you get in GDPR Pro Package
Interviews by Leading GDPR Experts & Attorneys
Video Overview on what GDPR Pro is
GDPR Pro Wordpress Plugin
Install Access On 10 of Your Personal Websites
Tutorials on setting up and using your plugin
4. Here’s What the GDPR Pro Software Does:
Intercepts European Traffic with Geo Mapping Capabilities
EU Traffic Refusal Capability
Beautiful Pop-Ups That Collects Consent From EU Traffic
Consent Logging with Date and Time Stamp
Cookie Consent
Data Access Request Form: Your subscribers can request to see what data you have on them
Data Rectification: Your subscribers can update and amend the data you have on them
Data Deletion Request Form: Your subscribers can request to have their data deleted
“One Click” Data Breach Notification
Privacy Policy Template
Cookie Consent Template
Terms & Condition Pages
5. Aren't All GDPR Plugins The Same?
Tumblr media
No, that's why it is important to know the difference.
Many software plugins send EU traffic to your privacy policy.
...then have visitors check a box that says they agree to it.
Here's the problem (and it is a big one).
There is no recorded confirmation of consent.
Which leaves a gaping hole  in your protection!
What if the same person gives you consent, then two hours later or the next day, they email you and revoke that consent. Then a short time later, they consent again and are back on your list. Then they complain that they had revoked their consent and you are still emailing them. The only official record you would have for that person is the record of them revoking consent.
If regulators questioned you, you could be considered
NOT IN COMPLIANCE!
6. Here's the GDPR Pro Difference:
It dates and time stamps WHEN consent was given, which is absolutely critical.
GDPR Pro provides you with an official record that can be sent to regulators if needed AND you can produce it for visitors when they ask what data you have on them.
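To illustrate why the timestamp matters, here is a minimal, language-neutral sketch (written in Go purely for illustration; GDPR Pro itself is a WordPress plugin and this is not its code or data format) of what one timestamped consent record might hold, so that a consent, a revocation, and a later re-consent each leave their own dated entry:

```go
package main

import (
	"fmt"
	"time"
)

// ConsentEvent is a hypothetical shape for one logged consent action.
// The field names are illustrative, not taken from GDPR Pro itself.
type ConsentEvent struct {
	VisitorEmail string    // who the record is about
	Policy       string    // e.g. "privacy-policy", "cookies"
	Granted      bool      // true for consent, false for revocation
	Timestamp    time.Time // when the action happened
}

func main() {
	// The same visitor consenting, revoking, then consenting again
	// produces three dated entries, so the latest state is provable.
	log := []ConsentEvent{
		{"visitor@example.com", "privacy-policy", true, time.Now().Add(-48 * time.Hour)},
		{"visitor@example.com", "privacy-policy", false, time.Now().Add(-24 * time.Hour)},
		{"visitor@example.com", "privacy-policy", true, time.Now()},
	}
	for _, e := range log {
		fmt.Println(e.Timestamp.Format(time.RFC3339), e.VisitorEmail, e.Policy, "granted:", e.Granted)
	}
}
```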
Here's the simple no-hype fact:
 GDPR Pro is a more "All Inclusive" version of compliance.
 GDPR Pro is what savvy business owners want.
Tumblr media
Standard license includes up to 10 personal websites
7. GDPR Pro Removes The Fear Of:
Unnecessary legal risks.
Becoming a target for fines.
Attracting bad publicity.
Attracting unwanted investigation.
8. GDPR Compliance Is Essential For ANY Business Website
9. A Word Of Caution
In the spirit of complete transparency we need to say something. As powerful as this software is, it won't solve EVERY problem, for EVERY type of business site in EVERY instance. If you have a significantly custom implementation of WordPress or you make use of very complicated features in WP doing much more than the blogging platform is typically used for, our software may not be entirely sufficient. However, it will accommodate much of the primary compliance requirements, and can create the sections that can be very expensive and time consuming to build.
10. Frequently Asked Questions
Q. Can I use GDPR on any WordPress site?
A. Yes, it works on just about all WP sites. Even custom versions of Wordpress should work.
Q. Will I need technical help to do this?
A. Not at all! We purposely designed it to be user friendly. Even someone new to WP should be fine.
Q. Can I install this myself?
A. Absolutely. Just refer to WP's own instructions for installing any plug-in, and watch our installation videos.
Q. Do you plan any updates?
A. Yes, GDPR Pro will be updated if needed. If WP makes changes in their install routine or other coding changes, GDPR Pro will maintain compatibility.
Q. Will this be ALL I need to remain compliant?
A. The best advice here is to stay on top of any changes or new interpretations of the GDPR regulations. Ultimately, it is still your responsibility to keep abreast of what's going on.
Tumblr media
Standard license includes up to 10 personal websites
11. HIGH LEVEL BONUS NOW ADDED!
Tumblr media
These interviews are available nowhere else.
They're included in your purchase for a limited time.
Don't miss this opportunity to hear bona fide experts share what's really going on with GDPR regulation.
12. Being Compliant Gives Your Business Security
EU visitors and others will trust your website more.
Enjoy improved sales and signups from EU visitors.
Obtain legal consent from visitors and be protected.
Not looking over your shoulder fearing legal issues.
13. Our Mission:
Presenting this new software to you is a privilege. While of course we're marketers, we are also a group of caring individuals and we want to make a difference.
From the day we embarked on developing this plug-in we had one goal.
Help fellow marketers and business people deal with this new compliance issue and simplify the process.
We did the research and designed our software so it was very comprehensive, yet easy to use. We added extra training and support so you would have a great user experience.
We stand ready to help you and want you to know, the sale is just a beginning for us. Our mission is to build lasting relationships with marketers all over the world. We hope you will join us.
Tumblr media
Standard license includes up to 10 personal websites
14. You're Getting All This If You Act Now:
New WordPress software plugin designed to immediately bring you into GDPR Compliance.
Variety of attractive GDPR software forms to choose from.
Leading authority interviews that answer your questions and provide further understanding.
Video Installation Training and practical tips for usage.
Ongoing support.
Use on up to 10 personal websites
To our knowledge, there's no more comprehensive GDPR Toolset out there.
We set a high goal and early testers are telling us we succeeded!
15. No-Risk 30-Day Guarantee
We have one goal and one goal only: to provide you with the best solution to a worldwide marketing challenge. Being in compliance with these new regulations is serious business and we take our responsibility to you just as seriously. Our first concern is your complete satisfaction. We value our customer relations. To make your purchase as risk-free as possible, we provide you with a No-Risk 30-Day Guarantee. If you don’t like the software for any reason, notify us within 30 days and we will refund 100% of your purchase price. No hassles, no delays!
Tumblr media
Standard license includes up to 10 personal websites
from SOFTWARE MAKETING ONLINE http://www.mmosoftware.net/2018/05/gdpr-pro-software.html
0 notes
jvzooproductsclub · 6 years
Text
GDPR Pro Review | Demo | Huge Bonuses | Why is this different from Cyril’s GDPR Fix Plug?
GDPR Pro Review | Demo | Huge Bonuses | Why is this different from Cyril’s GDPR Fix Plug?
Learn more here: http://mattmartin.club/index.php/2018/05/27/gdpr-pro-review-demo-huge-bonuses-why-is-this-different-from-cyrils-gdpr-fix-plug/
Welcome To MattMartin.Club!
Thank You So Much For Taking The Time In Checking Out My Review On "GDPR Pro"
Hope You Will Enjoy It!
New KILLER Software Makes You GDPR Safe – In Minutes!
Overview:
Product Creator: Mario Brown
Product Name: GDPR Pro
Front-End Price: $27.00
Niche: WordPress Plugin
Bonuses: YES! Huge Bonus Package Listed Below
Refund: 30 Days Money Back Guarantee
Recommendation: Yes, 100% from Matt Martin 🙂
Launch Date: 2018 – May – 27 @ 11:00 AM EST
Official Website: Checkout "GDPR Pro" Official Site
Click Here @ 11 AM EST on 2018-May-27 To Get An Early Bird Discount On “GDPR Pro” Along With My Exclusive Bonuses
In case you’ve been hibernating for the last 6 months, GDPR is the European Union’s attempt to regulate how customer and visitor data is handled by marketers, like you and me. It will affect the BIG guys of course, but also the little guys, too. Complying with the new GDPR requirements can present a real challenge…Unless you have plenty of time and money. And are willing to wade through all the panic peddlers… and phony experts… and the misinformed free advice.
There are website owners who say they don’t care about the new GDPR regulations. They say, “it’s thousands of miles away, what do I have to fear? …screw ’em!” It sounds like tough talk, I know, but it’s a mistake. Why?
Because they’re taking a great risk for very little gain. If these business owners are right, they still gain nothing, but don’t get fined or harassed. BUT, If they’re wrong, watch out! Now they face scrutiny that could cost them time and money and… even injure their reputation. How is that a good bargain? Let’s find out in my GDPR Pro Review below!
Related Post: WP GDPR Fix Review – New plugin gives you max GDPR compliance
GDPR PRO REVIEW – WHAT IS IT?
I’ll bet you’ve noticed what I’m seeing the last week or so. Emails announcing updates to Website Privacy Policies. They usually start out with: “…We value your privacy and want you to understand how Yelp uses your information. To that end, we’ve updated our Privacy Policy to make sure you have current and accurate information about [Blanks] privacy practices, and the ways you can control how your information is used through our services…”
Of course these internet businesses are doing this for ONE reason; The GDPR Regulations go into effect this week. So, they want to cover their “Assets”, if you know what I mean. All these companies have a lot at stake. They don’t want to risk the scrutiny, the potential fines and the hit to their image if they’re not in compliance.
But what about your business websites? Are you protected, or are you going it alone, risks and all? You don’t have to take the risk nor do you have to spend a lot of time and money to get in compliance. You need ONE thing; GDPR PRO WordPress software plugin.
GDPR PRO WP software Plugin erases all those issues- instantly. This New powerful plugin makes you compliant with GDPR on 7 key features. Just install GDPR Pro and you’ll instantly be compliant with all those requirements. Yes, instantly. Forget about spending money on expensive developers or annoying lawyers… its just not necessary. GDPR PRO makes compliance so easy you’ll wish you’d never hesitated. As they say, this is a “No-Brainer”.
GDPR Pro Rating - 9.3/10
Quality - 9.5/10
Features - 9.5/10
Support - 9.5/10
Easy to use - 9/10
Bonus - 9/10
Summary
This rating only shows our ideas about this product; we strongly recommend you first see the demo/preview to get the whole picture. Remember, you're also backed by a 30 Day Money Back Guarantee, No Questions Asked! You've got nothing to lose. TRY IT TODAY! What's your thought? Please let us know!
ABOUT AUTHOR
To many online marketers, Mario Brown must be a familiar name since he has created many trending products targeting multiple niches. In case you don’t know, Viddictive 2.0, DealCount, Visualai, Storie, Vidoyo, Ad Quiz Video, Commission Evolution, etc. were all released under his name.
During his career, Mario Brown has gained a reputation for many outstanding achievements. I strongly believe that GDPR Pro Software will sooner or later become a bestseller in the marketplace. The following part of my GDPR Pro Review is going to focus on its functionalities.
FEATURES OF GDPR PRO
Here are what you will get inside this product:
New WordPress software plugin designed to immediately bring you into GDPR Compliance.
Variety of attractive GDPR software forms to choose from.
Leading authority interviews that answer your questions and provide further understanding.
Video Installation Training and practical tips for usage.
Ongoing support.
GDPR Pro is the COMPLETE Package. Just look at the features that come standard:
Software brings your site into compliance on 7 Key GDPR requirements.
Works with your blog or any other custom implementation of WordPress.
Works with E-Commerce stores.
The cookie requirement compliance assures your EU visitors are briefed about cookie policy.
‘Terms and conditions’ policy compliance gets your visitors consent to your T&C.
‘Privacy policy’ compliance creates consent requirement for your Privacy Policy.
‘Right to Forget’ compliance lets you delete your user data manually.
Option to refuse to accept EU traffic on your site (built into the plugin).
That means You’ll Be Protecting Your Visitors By Providing Them:
Right of Access: Your subscriber now has the right to request a copy of any of their information you have on file.
Right of Erasure: Built into this plugin, add the form and users can delete all their data while on your website.
Breach Notification: Your users automatically get data breach notification – by law you have 72 hours from first becoming aware of the breach.
Cookies Consent: All cookies / pixels are automatically blocked until your website visitor gives their consent.
Right of Rectification: Your subscriber can update their personal data at any time.
Right to Object: Simplified system where your subscriber can easily unsubscribe at any time.
Terms & Condition Pages: Auto-generate privacy, terms and condition page with built in auto redirect until condition is met.
WHY SHOULD YOU GET IT?
Many software plugins send EU traffic to your privacy policy… then have visitors check a box that says they agree to it. Here’s the problem (and it is a big one). There is no recorded confirmation of consent. Which leaves a gaping hole in your protection! What if the same person gives you consent, then two hours later or the next day, they email you and revoke that consent.
Then a short time later, they consent again and are back on your list. Then they complain that they had revoked their consent and you are still emailing them. The only official record you would have for that person is the record of them revoking consent. If regulators questioned you, you could be considered not in compliance! That’s where GDPR Pro is different from other plugins.
It dates and time stamps WHEN consent was given, which is absolutely critical. GDPR Pro provides you with an official record that can be sent to regulators if needed AND you can produce it for visitors when they ask what data you have on them. Here’s the simple no-hype fact:
GDPR Pro is a more “All Inclusive” version of compliance.
GDPR Pro is what savvy business owners want.
GDPR Pro gives you more design choices, too!
In short, GDPR Pro Removes The Fear Of:
Unnecessary legal risks.
Becoming a target for fines.
Attracting bad publicity.
Attracting unwanted investigation.
Now let’s take a look at the benefits come up with:
EU visitors and others will trust your website more.
Enjoy improved sales and signups from EU visitors.
Obtain legal consent from visitors and be protected.
Not looking over your shoulder fearing legal issues.
In addition, you will be getting tons the vendor’s greatest bonuses for your fast action:
Is it enough awesomeness for you? Because you will also be receiving my ULTIMATE huge bonuses. Those treasures are waiting for you at the end of this GDPR Pro Review. And even if you do nothing but read my GDPR Pro Review, to thank you for your kind support, I still give you free bonuses. So keep reading, then scroll your mouse down!
HOW DOES IT WORK?
GDPR Pro works on just about all WP sites. Even custom versions of WordPress should work. Just refer to WP’s own instructions for installing any plug-in, and watch the installation videos. They purposely designed it to be user friendly. Even someone new to WP should be fine.
Let’s check out the demo video to see it in action!
vimeo
GDPRpRO-DEMo from Offline Sharks on Vimeo.
Why is this different from Cyril’s GDPR Fix Plug In Video:
vimeo
WP GDPRPRO Demo 2 from Offline Sharks on Vimeo.
WHO IS IT FOR?
Bloggers
Do EU residents visit your site and interact, leaving comments and posts, and emailing you, etc? Do you track your users? Do you store their data, their IP addresses? Or integrate with 3rd party websites that use cookies? Then you must be GDPR compliant.
For Affiliate Marketers
Do you capture leads on your site? Do you store visitor’s names and emails? Do you use analytics, Facebook or Twitter pixels? Then you must be GDPR compliant.
E-Commerce Sellers
Do you invite visitors to create accounts and share their emails or contact details? Do you track people through cookies? Then you must be GDPR compliant.
For Business Branding Websites
Even sites that don’t sell or accept payments or collect data, come under GDPR dictates. If you have cookies (or record IP addresses) or other features enabled on your site you will need to be in compliance. Then you must be GDPR compliant.
Related Post: GDPR Suite Review – Get GDPR Compliant In Less Than 5 Minutes
PRICE AND EVALUATION
For a limited time, you can grab GDPR Pro at an early bird discount price with the options below. Let’s pick the best suited option for you before this special offer is gone!
Front End: Main Plug-In + Training: $27 >> See Details <<
OTO1: PRO Version, More Features & Licenses: $47 >> See Details <<
OTO2: Agency License: $67 >> See Details <<
OTO3: GDPR Marketing Package: $37 >> See Details <<
OTO4: SSL Sniper Software: $27 >> See Details <<
Let’s act now, don’t delay, and grab it now while it’s still at the lowest price possible! And just feel free to give it a try, because you have a full 30 days to put this to the test and make sure that this is for you. If you do not see any results within this period then please reach out to them. The Helpdesk Team is always there to help you out and make sure that you have been following the correct procedures.
GDPR PRO REVIEW – CONCLUSION
In summary, I hope that all of the information in my GDPR Pro Review helps you understand this product better so that you can make a wise choice. If you’re ready to start making a real online income in the most passive way possible, then click the button below before the price rises. I look forward to seeing your success.
However, if you are in need of any advice, please feel free to get in touch with me anytime. Regardless, thank you for reading my GDPR Pro Review. Goodbye, and see you again!
Click Here @ 11 AM EST on 2018-May-27 To Get An Early Bird Discount On “GDPR Pro” Along With My Exclusive Bonuses
  4 SIMPLE STEPS TO CLAIM YOUR BONUS PACKAGE
1. Clear your cookies in your web browser (Ctrl + Shift + Delete).
2. Purchase the product through my email/website.
3. Contact me here with the receipt of your purchase.
4. ALL bonuses in the General Internet Marketing Bonuses Package are yours, and you will receive them within 12-48 hours.
#Gdpr, #Gdpr_Fix, #GDPR_Pro, #GDPR_Pro_Bonus, #GDPR_Pro_Launch, #GDPR_Pro_Review, #Jvzoo, #JvzooProductReview, #JvzooProducts, #ProductReview, #Why_Is_This_Different_From_CyrilS_GDPR_Fix_Plug, #Wordpress, #Wordpress_Plugin, #WP_GDPR_Plugin, #WP_Plugins
0 notes
mbaljeetsingh · 6 years
Text
Native And PWA: Choices, Not Challengers!
It’s hard to tell exactly where the rift between “native” and “web” really started. I feel like it’s one of those things that had been churning just below the surface since the early days of Flash, only to erupt more recently with the rise of mobile platforms. Regardless, developers have squared off across this “great chasm,” lobbing insults at one another in an attempt to bolster their own side.
I have no interest in that fight. Sure, I’m a “web guy,” but that doesn’t mean I can’t see the appeal of native development; I’m also a software developer. Above all, though, I’m a pragmatist. I realize every project is different and that our approach to each should be tailored to the project’s needs and goals.
With Progressive Web Apps (PWAs) encroaching on native development’s turf, I thought this might be a good time to step back and take stock of these two approaches to building products. My hope is that we will all walk away able to see the strengths of each approach clearly and find the right fit for the products we create.
Since pretty much the beginning, web-based experiences have been compared with everything from desktop software (the original “native apps”) to interactive CD-ROMs to Flash and, most recently, to mobile apps. Despite being declared dead on numerous occasions, the web has persisted. In many cases, it’s even outlived its alleged killers (R.I.P. Flash).
One of the web’s chief strengths in this regard is its adaptability. It’s been able to go pretty much anywhere there’s an Internet connection and continues to gain new capabilities. All programming languages evolve, so it’s not unexpected, but over time the web’s growing borders have continued to encroach on traditional software’s turf.
One area where the web has historically come up short, however, has been in the realm of performance. Installed software is able to tie into the underlying operating system in ways the web simply can’t. It’s written in the lingua franca of its host, with direct access to the hardware, or “closer to the metal” as we often say. The web, which nearly always operates one or more layers of abstraction above that, has had a hard time competing when it comes to performance. While the performance gap has narrowed over time, native code is likely to continue running faster than web code, at least until the web becomes capable of interpreting signals directly off the pins of the hardware.
Hand-in-hand with the performance advantage, native development has far greater (and earlier) access to device features such as NFC, Bluetooth, proximity and ambient light sensors, and more. The web is steadily gaining access to these features as well, but it will always lag behind native because the native APIs need to be developed before the web can tap into them and standardization across the browser-scape takes time.
Additionally, native code can hook into OS-level features like the address book and calendars. Push notifications were another big one, but Service Workers now enable the web to take advantage of that feature as well. Payment processing has similarly improved on the web recently. Perhaps address book and calendar access will come to the web eventually as well.
Circling back to Service Workers for a moment, this recent addition to the web developer’s toolbox has a number of other tricks up its sleeve, too. First of all, it offers a much more robust caching system than the web had previously with AppCache. You can use Service Workers to manage offline requests, cache specific resources, sync data with a remote server when the user doesn’t even have the site open, and a ton more. Perhaps more than any other single technology, Service Workers have enabled the web to offer a more app-like experience.
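As a rough illustration of the caching side of this, here is a minimal Service Worker sketch. It is not production-ready code, and the cache name and file list are made-up placeholders, but it shows the basic install-and-fetch pattern described above.

// sw.js: a minimal, illustrative Service Worker (hypothetical file names).
const CACHE_NAME = "example-cache-v1";
const PRECACHE = ["/", "/styles.css", "/app.js", "/offline.html"];

// On install, pre-cache a handful of core resources.
self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE))
  );
});

// On fetch, answer from the cache first and fall back to the network.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});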
Service Workers are one of the three technical lynchpins of PWAs. Another one is the Web App Manifest. While it may sound a little boring, a Web App Manifest is actually an incredibly powerful tool in that it enables a website to advertise itself as an app. This relatively straightforward JSON file format provides a wealth of information about the website it describes and enables PWA-aware browsers and operating systems to install the site as though it was native software.
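For a sense of what that file contains, here is a minimal, hypothetical manifest. It is shown as a JavaScript object literal for readability; the real manifest.json is plain JSON, referenced from the page head with a link rel="manifest" tag, and every value below is a placeholder.

// Roughly what a minimal manifest.json holds (illustrative values only).
const manifest = {
  name: "Example Progressive Web App",
  short_name: "Example",
  start_url: "/",
  display: "standalone",        // launch without browser chrome, like an app
  background_color: "#ffffff",
  theme_color: "#0b5394",
  icons: [
    { src: "/icons/icon-192.png", sizes: "192x192", type: "image/png" },
    { src: "/icons/icon-512.png", sizes: "512x512", type: "image/png" },
  ],
};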
Some app stores are even beginning to index PWAs, using their Manifest to populate their associated entries. From a user perspective, PWAs in app stores aren’t any different than the native apps surrounding them. They are installable, un-installable, and can even expose their settings to the underlying operating system’s app management tool. It’s also worth noting that PWAs don’t actually need a user to explicitly install them in order to use them because, well, they live on the web.
Being both on and of the web also means PWAs are always up to date. Users won’t need to actively download anything new to access new functionality. And even when new content and features do get rolled out, it’s highly unlikely a user would need to re-download your entire PWA as they would in the case of most native apps. If anything, they may get a few new assets and some new HTML, and it would happen pretty instantaneously, no app store required. Of course, you still have the discovery and distribution option provided by app stores, so really it’s the best of both worlds.
Being in app stores puts PWAs on equal footing with native apps in terms of discovery, distribution, and monetization. In fact, it may even vault the web over native as PWAs are also discoverable via search engines and are infinitely more shareable than apps because they exist at a URL. When well-built, PWAs are also interoperable across browsers, platforms, and devices. PWAs even work in browsers that don’t support features like Service Workers, because PWA features are progressive enhancements.
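That progressive-enhancement point is easy to see in code. A site typically registers its Service Worker only where the browser supports it, and everything else keeps working either way; the file name below is just a placeholder.

// Register the Service Worker only in browsers that support it.
// Older browsers skip this block and simply get the regular website.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then((registration) => {
      console.log("Service Worker registered with scope:", registration.scope);
    })
    .catch((error) => {
      console.error("Service Worker registration failed:", error);
    });
}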
The web also offers very mature accessibility support, making it relatively easy to ensure your projects are usable by the broadest number of users. Complex interfaces do require a little more diligence when it comes to programming, but the accessibility benefits afforded by semantic HTML handle baseline accessibility with aplomb — especially when it comes to text-heavy, informational or simple form-based products. By contrast, you nearly always need to be aware of and incorporate accessibility APIs into your native code.
On the topic of development, I don’t think there’s a clear winner when it comes to development experience. Every language has its fans, and the same can be said for developer tools. You like what you like, and you tend to be more efficient with the tools and languages you know and are passionate about. Neither the web nor native development has any distinct advantage there.
Where native development does shine, however, is in ensuring a consistent level of quality for UIs built using the platform SDKs (Software Development Kits). Most native SDKs offer tools for testing performance, design, functionality, and more. They also include boilerplate code, design systems, and other tools that help raise the overall bar of native software offerings. Sure, there are similar tools for the web, but they are scattered across the web and aren’t universal across all of the different development environments folks use to build websites. There is no single entity defining quality web experiences and providing the tools to build them (though many have tried).
When it comes to staffing the development of a product, it’s definitely easier to hire folks who know how to build for the web. As I type, the Programming Language Index currently ranks JavaScript as the most popular language, with Java right behind it. C# is in 5th place, HTML in 7th, CSS in 9th, and Swift comes in 15th. This index cross-references Stack Overflow tags with lines changed in public repositories on GitHub, so it should be taken with a grain of salt, but it provides a pretty clear indication that many folks know (and use) web technologies. On the flip side, it can often be challenging to find and hire talented native developers because there are fewer of them and they are in high demand.
Scarce talent means you’ll likely end up paying more for native development. Every project is obviously different and has different features and requirements, but it can be illustrative to look at average development costs as a comparison. Comentum suggests that building a moderately-sized web app ranges from under US$10,000 to US$150,000. On the native end, Appster estimates that moderately-sized mobile app projects cost between US$110,000 and US$305,000 to build. It’s probably safe to assume native projects are likely to cost about twice as much to develop as a web project. And that’s per platform. Native apps also typically take longer to develop.
It’s worth noting that there are options for building native software using web technologies without building a PWA. Frameworks like React Native, PhoneGap, Ionic, and Appcelerator Titanium enable you to generate native code from HTML, CSS, and JavaScript. Using one of these tools could lower your staffing and development costs when compared with hiring a team of native developers, but in terms of access to device features your project will be limited to the ones the framework has implemented. Plan accordingly.
Once the app is developed, you also have to account for ongoing maintenance costs of said app or apps. In response to a survey run by Clutch, Dom & Tom recommends budgeting 50% of the product’s initial price in the first year, 25% in the second year, and between 15% and 25% for every year after.
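As a quick worked example of that budgeting guidance (the initial build cost below is a placeholder figure, not a number from either survey):

// Rough maintenance-budget arithmetic based on the percentages above.
const initialBuildCost = 150000; // hypothetical initial development cost, in USD

const yearOneMaintenance = initialBuildCost * 0.50; // 50%      -> 75,000
const yearTwoMaintenance = initialBuildCost * 0.25; // 25%      -> 37,500
const laterYearsLow = initialBuildCost * 0.15;      // 15%/year -> 22,500
const laterYearsHigh = initialBuildCost * 0.25;     // 25%/year -> 37,500

console.log({ yearOneMaintenance, yearTwoMaintenance, laterYearsLow, laterYearsHigh });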
Once you have a decent grasp on how web and native development compare and contrast, you can begin to assess which strengths and weaknesses are relevant to your project. You’ll likely have to make some tradeoffs, and that’s to be expected. There is no one-size-fits-all solution. And if there was, it wouldn’t fit anyone well.
Let’s run through a couple of hypothetical projects to bring the distinctions between development for native and PWA more clearly into focus.
Project #1: An Existing Native App
Let’s say you’ve already gone through the process of building out a native app. If everything is going well, there’s no reason to change course. Don’t throw out all of your existing work to switch over to a PWA unless you have a really good reason to do so. I can only really think of one scenario that might warrant considering a switch to PWA: Bringing support for a new OS into the mix. And even then, it only really makes sense if your app’s needs can be met by the web alone.
If you are adding support for a new platform to a product, it creates the perfect opportunity to evaluate the needs and goals of the project with regard to the cost of meeting those needs. If the web isn’t up to the task, stick with native. If it is, however, pause and consider this: Adding support for the new platform using a PWA would allow you to support additional platforms (including the web) down the road and could even enable you to replace your existing native application once the PWA has been thoroughly tested.
If replacing an existing native app with a PWA seems unthinkable to you, consider this: Starbucks and Twitter are already exploring this idea.
If there are specific reasons you need to keep your apps native, it can still be worth considering “outsourcing” certain app features to the web. A few years back, I was working on a project for a large financial services firm, and they opted to move the login flow for their native apps to the web in order to roll out security features more quickly than the typical app store approval process would allow. Perhaps your project has similar needs that the web could empower your native app to meet.
Project #2: An Existing Cross-Platform App
If you’ve got an existing app that works cross-platform, you’re likely shelling out a lot of money for the ongoing development and maintenance of that app. You’re also likely seeing some divergence in app features, as each native platform tends to have its own development timeline. The app for the retailer Target, for instance, currently allows users to manage a shopping list on iOS, but the Android version doesn’t have that feature. In many ways it’s similar to the divergence we sometimes see between the “desktop” and “mobile” versions of a website.
If the web is already part of your cross-platform mix, it provides a good opportunity to double down on your investments there and consider replacing your native apps with PWAs. Tools such as sonar and Lighthouse can give you insight into how well-prepared your existing site is for PWA-ification, and they can also tell you what you need to work on. From there, turning your website into a PWA is relatively straightforward; there are even free tools that can help you upgrade your site to a PWA in a few short minutes. If the web isn’t already part of your mix, however, there’s really not much incentive to make this move unless the feature divergence between platforms becomes really egregious or you are considering adding yet another native platform (or the web) into the mix.
Project #3: A New Cross-Platform Product
If you’re kicking off a new project aimed at more than one platform, creating and maintaining it in one place as a PWA should definitely be on the table. Depending on your budget and staff, it’s likely to stretch your budget the furthest. That said, if your product requires a direct connection to hardware or the underlying OS, you may still need to go native. But before you go that route, make a list of all of your requirements and then verify what the web can do (and what it can’t). Be sure to check for support in more than one browser too.
Project #4: A New Hyper-Focused Product
If you are building a new product and part of that product’s whole purpose is its deep connection to a particular platform, by all means, build for that platform. For instance, if your product relies on Apple’s Messages platform or integration with HomeKit, build for iOS and/or macOS in Swift. If your product will best meet user needs via a widget, or you’re building a custom launcher, you’re best off building for Android, and you’ll need to use Java.
Not all native features are walled gardens, however. While Amazon’s Alexa, Apple’s Siri, and the Google Assistant all require native code (or a JSON API) to integrate with your app, interestingly Microsoft’s Cortana will voice-enable your PWA using only an XML file linked from the head of your HTML. Perhaps others will follow their lead.
PWA Or Native? The Choice Is Yours
The web and native each have a lot to offer. If you were to ask me which is better, I’d simply reply “It depends” because it does. I’m not being evasive or noncommittal; figuring out which is the right fit for your project depends entirely on the specific needs of your project. It requires taking into consideration what you are building, the composition of the existing team tasked with building it or the team you will need to hire to do so, and the budget you have to work with. In many cases, the web likely offers everything you need to accomplish your project’s goals, but there are always exceptions. If you want to explore the possibilities the web offers, I’ve included some resources at the end of this article.
The most important thing you can do when weighing different approaches to software development — or different frameworks, libraries, languages, design systems, etc. — is to consider your options in relation to the project at hand. Do your research and weigh your options. And don’t allow yourself to be swayed one way or another by clever marketing, sexy demos, or rabid fanboys. Including this one.
PWA Builder: A 3-step website-to-PWA creation tool with helpful recommendations and recipes. It also enables you to turn your PWA into installable native apps for Windows, Android, and iOS.
A Progressive Road Map for your Progressive Web App: Jason Grigsby on how his team began incorporating aspects of PWAs into their website over the course of several months, nicely demonstrating how the different features can be added a bit at a time.
Yes, That Web Project Should Be a PWA: An overview of the UX opportunities (and risks) of PWAs, written by yours truly.
Progressive Web Apps on MDN: A hub for all of the technical bits you need to know about what characterizes a quality PWA.
What Web Can Do Today: Take a look at the APIs your device, OS, and browser support. What you find might surprise you.
Can I Use: The definitive database of what APIs and features are available in every major browser and how that support measures up relative to the browsers people are actually using. It can also give you an excellent view back in time to see how backward compatible certain features are.
via Articles on Smashing Magazine — For Web Designers And Developers http://ift.tt/2nTEVt0
0 notes
gta-5-cheats · 6 years
Text
Honor View 10 Review
New Post has been published on http://secondcovers.com/honor-view-10-review/
Honor View 10 Review
2017 was the year when most smartphone manufacturers began shifting to 18:9 displays. Honor was one such company, launching one phone after another following this trend. We saw the Honor 9i (Review) and the Honor 7X (Review) being introduced in the sub-Rs. 20,000 segment. The company is now targeting a higher price point with its Honor View 10.
The Honor 8 Pro (Review), its flagship for 2017, managed to shake the market up while competing against the OnePlus 5 (Review). It packed in good hardware and managed to undercut the OnePlus offering on price as well. Now, the Honor View 10 is on the same path, and offers better hardware to take on the competition. It is powered by Huawei’s latest silicon, the Kirin 970, and has a stronger focus on artificial intelligence with what the company calls a “neural-network processing unit”. Honor claims that the new chip is capable of learning your behaviour patterns, helping you take better photos, and translating multiple languages in real time. So should the View 10 be your smartphone of choice for 2018? We find out.
  Honor View 10 look and feel
The design of the Honor View 10 is in line with the current market trend of taller screens and narrower borders. It sports a big 5.99-inch display with the 18:9 aspect ratio. It has thin borders on the side and comparatively thicker ones on the top and the bottom. The selfie camera and earpiece grill are above the screen, along with an array of sensors, and there’s a fingerprint scanner below it. Honor offers the View 10 in two colours: Navy Blue and Midnight Black. We had a Navy Blue review unit, and it looked somewhat similar to the colour of the Honor 8 Pro. It’s definitely different and eye-catching.
The View 10 sports a metal unibody. Its flat back has antenna bands running along the top and bottom. It has dual cameras and a single-LED flash at the back, similar in design to the setup seen on the Honor 7X. The lenses protrude out of the body causing the phone to rock when kept on a flat surface. They both have metal surrounds which protect them but feel rough.
The sides are curved making the phone comfortable to hold, although the flat metal back can be a little slippery in the hand. Honor has positioned the power and volume buttons on the right, and while the power button is easy to hit, you will need to stretch your thumb to hit the volume up button. The hybrid dual-SIM tray is on the left. The View 10 sports a USB Type-C port at the bottom along with a speaker grille and 3.5mm headphone jack. At the top, the View 10 has a secondary microphone and an IR emitter that can be used to control IR devices.
The Honor View 10 has a 3750mAh battery and supports the company’s own Supercharge standard. Sadly, Honor does not ship the required 5V, 4.5A charger with the phone in India. Instead, you will find a standard charger in the box that isn’t as fast. You get a screen protector pre-applied on the View 10, and a clear case bundled in the box.
Honor View 10 specifications
The View 10 is Honor’s new flagship offering, and like other phones at this price, it is loaded with features. The screen is an IPS panel, measuring 5.99 inches with an FHD+ (1080×2160) resolution. Viewing angles are good and the screen was usable under direct sunlight. We liked the output of the display and the fact that the phone let us adjust the colour temperature. Display modes are also provided, which let you choose between neutral and vivid colour reproduction.
The fingerprint scanner below the display is quick to unlock the phone. Honor also claims that the View 10 is capable of using your face to unlock the smartphone, but this feature is not enabled yet. At the time of our review, the Face Unlock feature could only be used to show notifications on the lock screen. Honor told Gadgets 360 that the full feature will be rolled out via an OTA update.
The Honor View 10 is powered by a Huawei Kirin 970 SoC, which is an octa-core processor, with four cores clocked at 2.36GHz and the other four clocked at 1.8GHz. The Kirin 970 has a dedicated Neural Network Processing Unit (NPU) which is tasked with handling Artificial Intelligence functions. Huawei claims that the dedicated NPU computes AI tasks faster than the CPU, while being more efficient. The Face Unlock feature and the cameras on the Honor 10 use this NPU. Honor ships the View 10 with 6GB of RAM and 128GB of storage, which is expandable by up to 256GB using a microSD card in the hybrid dual-SIM tray.
The Honor View 10 supports 4G as well as VoLTE connectivity on both SIMs, which means that both cards can access their respective 4G and VoLTE networks. With most current phones, only the SIM in the primary slot can access a VoLTE network, but this doesn’t seem to be the case with the View 10. The phone could register our Jio and Airtel SIMs on their respective networks independently. However, this isn’t the same as being a Dual-SIM Dual-Active phone, because when one network is engaged in a phone call, the other SIM is still unavailable to anyone who tries to reach you. Honor has come up with a call forwarding option that lets you transfer calls to the active number automatically. For data, of course, the phone only lets you select a single network at a time.
The View 10’s dual rear cameras consist of one 16-megapixel RGB sensor and one 20-megapixel monochrome sensor. At the front, there is a 13-megapixel selfie shooter that is also used for the Face Unlock feature. In terms of connectivity, the View 10 supports Bluetooth 4.2, dual-band Wi-Fi, NFC, and USB-OTG. For positioning, it has GPS, A-GPS, GLONASS, and BDS.
Honor View 10 software and features
Like all other Honor phones, this one runs EMUI, Huawei’s UI layer on top of Android. The View 10 runs EMUI 8 with quite a few customizations on top of Android Oreo. There is theme support, letting you change the way the UI looks to suit your preferences. Smart gestures help you interact with the phone, such as flipping it over to mute it and picking it up to wake the screen.
Knuckle gestures let you double-knock with one finger to take a screenshot, or two fingers to begin screen recording. You can also launch apps by tracing letters on the screen. If you think this sounds complicated, it is. Honor has crammed in a lot of such features, and while the screenshot gesture was useful, the others felt gimmicky. We also found a lot of sub-menus in the Settings app, and it was sometimes hard to get to a specific setting without using the search function.
Honor offers a one-handed mode which is very useful when handling this big phone. You can also opt to disable the on-screen navigation buttons and instead tap, long press, or swipe the fingerprint scanner to simulate the Back, Home, and Overview button actions respectively. While this is fairly easy, we found it less convenient than a similar implementation on Motorola phones.
For security, Honor offers a Private Space that lets you have a completely different profile secured using a different fingerprint and passcode. File Safe encrypts files on the phone which can then only be decrypted using an alphanumeric passcode or an associated fingerprint, and App Lock lets you restrict access to apps using a fingerprint. While you won’t have to search for multiple apps to do these things if you want them, learning all the features of this phone can be a little overwhelming. You also get dual app functionality called App Twin which lets you run two instances of supported apps.
Honor ships the phone with a few preinstalled apps. Apart from Honor support apps, there are quite a few demo versions of games. To showcase its AI capabilities Honor has also preloaded Microsoft Translator which is not the same as the version available via the Google Play store. While you can chat with other people using text and have it translated on the other end using the regular app, you can also do this with voice on the View 10. It isn’t clear whether this feature is coming to all phones or whether it is exclusive to the View 10 (or other devices with specific capabilities), but we were able to use it with the other party using a Google Pixel 2. It worked fairly well translating between English and Hindi, though context was sometimes missed.  
Honor View 10 performance, cameras, and battery life
Flagships aim to provide the best usage experience, and the Honor View 10 isn’t any different. Huawei’s Kirin 970 processor is quite potent and the 6GB of RAM makes usage quite smooth. Apps launch quickly, and multitasking is easy. The View 10 managed to score 173,982 in AnTuTu, and also scored 1,900 and 6,709 in Geekbench 4’s single-core and multi-core tests respectively. This phone managed to surpass the Google Pixel 2 (Review) while coming within striking distance of the OnePlus 5T. In GFXBench, the phone managed 59fps, and 31,347 points in 3DMark Unlimited.
We played Shadow Fight 3 and Clash Royale, and faced no issues with gameplay. Like most other phones with 18:9 screens, many apps and games don’t use all available space and run at 16:9. Honor gives users the option to scale compatible apps to the new ratio. EMUI’s Game Suite feature claims to boost the phone’s performance when gaming, and also suppresses notifications to prevent interruptions and disables the navigation keys to prevent accidental touches. The speaker on the View 10 is loud but is also very easy to cover when holding the phone in landscape mode.
Battery life on the Honor View 10 is decent thanks to the 3750mAh battery. We managed to run the phone past one day with light to medium use, and it could manage one day when we included some long gaming sessions. In our HD video loop test, the phone lasted 11 hours, 14 minutes before running out of juice. EMUI has battery-saving modes that limit background activity, disable automatic syncing, and more. Ultra Power Saving Mode restricts the phone to only calls and messages so that it can run for significantly longer. Just like the Samsung Galaxy S8 and the Samsung Galaxy S8+, Honor lets you lower the resolution from FHD+ to HD+, which should be beneficial to battery life. It also has a smart setting which automatically switches resolutions based on what you are doing with the phone.
Other than face recognition and the translator app, the cameras are where the Honor View 10 puts its NPU to use. The camera app will seem familiar if you’ve used other Honor devices, but look deeper and you’ll see that there are more options and an AI-powered scene detection mode. The View 10 uses the NPU to try and understand subjects in a frame and then use the best possible settings according to its algorithms. We found the AI-powered auto mode to be accurate and fast enough to set the scene up before we hit the shutter button.
There’s an Artist mode that lets you apply filters before taking shots. Monochrome mode utilises only the 20-megapixel monochrome sensor to take photos, and the AR mode applies effects after detecting faces. There is Pro mode for photos and videos that lets you manually set different parameters of the camera.
Tap to see full-sized Honor View 10 camera samples
Photos taken with the View 10 are good in daylight when the lighting is favourable. Landscapes are good and colour reproduction is fairly accurate. Objects at a distance do lack detail, but macros are far better in comparison. The camera is quick to lock focus and captures details very well. The View 10 does detect specific subjects such as plants and animals rather than just conditions such as low light, and the resulting settings are fine. However, there’s no way to turn this off to gauge how much of a difference it really makes compared to standard scene detection that many other phones offer. In low light, the View 10 managed to surprise us in a few shots. It was able to capture good details and handled noise fairly well. If the light source is far away, the camera is quite aggressive with noise reduction, which results in loss of detail.
Selfies are usable, and can be enhanced using the beautify mode. Most of the time, the portrait mode worked well in blurring backgrounds and making subjects stand out. If there are multiple faces in a frame, the phone attempts to leave all of them unblurred regardless of depth, which looks extremely artificial.
Video recording maxes out at 4K for the rear camera and you do also have the option to shoot at 1080p at 60fps. The front-facing camera maxes out at 1080p but if you want beautification mode enabled on video, the output is restricted to 720p. Also, we couldn’t find any video stabilisation options on the phone so you will need to have a steady hand while recording.
Honor View 10 in pictures
Verdict
With the View 10, it is clear that Honor is being extremely aggressive in terms of features and pricing. It packs in the latest silicon from Huawei and the AI buzzwords will grab attention in the market. The hardware is all quite capable and is in line with current flagships from other manufacturers. Honor ships the phone with Android Oreo out of the box which is another plus. Priced at Rs 29,999, the Honor View 10 undercuts the OnePlus 5T (Review) on pricing by quite a bit. If you are on the 18:9 hype train looking for a bargain, the Honor View 10 looks like a strong contender at its price.
0 notes
lunar-vape · 7 years
Text
TPD is here - how will it affect vapers?
New Post has been published on https://www.lunar-vape.com/2017/05/19/tpd-is-here-how-will-it-affect-vapers/
TPD is here - how will it affect vapers?
May 20th 2017 marks the day on which the European Tobacco Products Directive comes into full effect and enforcement. Retailers have had a year to sell off their old, non-TPD-compliant stock and will, from now on, only be allowed to sell e-liquid and tanks deemed suitable for the European/UK market.
“…any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.”
Donne, John Meditation 17, Devotions upon Emergent Occasions (1624)
  Yes the bell tolls. It tolls for thee, for me and indeed it tolls for ‘we’
As a consumer myself I’ve spent the past year or so fuming, raging, and reconciling myself (grumpily) to the changes we’re going to have to get used to. I’d expect that most people reading a vape related blog will already be clued up on how these changes will directly affect vapers. However, many vapers will have been happily tootling along completely unaware of TPD and the changes on the 20th of May might come as a surprise. So here’s the basic lowdown on the whole grubby affair and how it will directly affect the consumer:
  E-Liquid
E-liquid containing nicotine can now only be sold in 10ml bottles. These bottles also have to comply with certain packaging requirements: they are required to have child-resistant caps and a ‘no-leak’ design.
Leak-proof packaging, according to Commission Implementing Decision (EU) 2016/586 of 14/03/2016, is a bottle:
‘…possessing a securely attached nozzle at least 9 mm long, which is narrower than and slots comfortably into the opening of the tank of the electronic cigarette with which it is used and possessing a flow control mechanism that emits no more than 20 drops of refill liquid per minute when placed vertically and subjected to atmospheric pressure alone at 20 °C ± 5 °C..’
Nicotine
E-liquids containing nicotine can only be sold in 10ml bottles and can contain NO MORE than 20mg/ml of nicotine. If you are accustomed to vaping at a strength above 20mg/ml, then you won’t be able to buy premixed juices at that strength after 20/05/17. Don’t think that this doesn’t affect you if you make your own e-juice at home. The nicotine you would usually buy to add to your PG/VG and concentrates is now also subject to the 20mg/ml, 10ml bottle rule, unless it has a medical licence.
Note: PG, VG, flavour concentrates and E-liquids containing no nicotine (0mg) are not subject to TPD and all of these are fine to sell in bottles larger than 10ml.
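For home mixers, the practical impact is mostly arithmetic. As a rough, illustrative example (the batch size and target strength below are made up for the sake of the sums, not a recommendation):

// Back-of-the-envelope mixing maths under the 10ml / 20mg/ml nicotine cap.
const targetVolumeMl = 60;  // hypothetical batch of finished e-liquid, in ml
const targetStrength = 6;   // hypothetical target strength, in mg/ml

const nicotineNeededMg = targetVolumeMl * targetStrength;   // 360 mg
const mgPerTpdBottle = 10 * 20;                             // 10ml at 20mg/ml = 200 mg
const bottlesNeeded = nicotineNeededMg / mgPerTpdBottle;    // 1.8 bottles
const nicVolumeMl = bottlesNeeded * 10;                     // 18 ml of nicotine base

console.log(nicVolumeMl + "ml of 20mg/ml nicotine (" + bottlesNeeded + " x 10ml bottles)");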
  Tanks, Clearomisers & Atomisers
Maximum tank capacity is now 2ml – yes that’s right, TWO millilitres! Read it and weep vaperinos and vaperinas!! That’s a lot of refilling if you vape at high wattages!
    Increased cost
Sadly, the TPD is going to come at a cost to everyone. This is not confined to those who are involved in the vaping industry, and not just to those people who currently vape. Prices of vaping products will most likely increase, which might well put current smokers off trying e-cigarettes as a means of nicotine replacement while they quit. Some current vapers may well see the increased costs of vaping as a reason to go back to smoking; only time will tell. Every person who might have quit smoking using e-cigarettes but didn’t, because they were put off by the idea that vaping wouldn’t be cost effective, potentially represents one more life lost because of this counterintuitive and poorly conceived legislation.
Why are there increased costs?
Manufacturers and retailers of vaping products have had to take on board a LOT of extra expenses due to TPD. For example every E-liquid containing nicotine had to be registered and tested before May 2016 and this was not cheap. A significant number of e-liquid manufacturers closed down as they could no longer afford to continue making e-liquid. Many had to drastically reduce their range of flavours and nicotine strengths. Each flavour/strength had to be registered and tested individually, and the manufacturers simply couldn’t afford to put all of their products through TPD compliancy testing. The e-liquid you see on the shelves post 20/05/2017 is only there because the manufacturer of that liquid went to a lot of time, trouble, and expense to remain TPD compliant.
Tanks are now required to have an ECID, and be registered on the MHRA website. They also need to undergo emissions testing to check that they are capable of delivering a consistent dosage of nicotine. Again, none of this is cheap. Also consider that the tanks needed a full design overhaul to comply with TPD in the first place.
Then there is packaging to take into consideration. Almost all packaging was redesigned to comply with the new regulations. E-liquid packaging now needs to display a whole bunch of specific warnings. This is in spite of the fact that most good e-juice manufacturers already had their own perfectly appropriate warning labels in place. The bottles had to be redesigned, and all e-liquid must now come supplied in a box.
You’ll start to see a lot of scary warning labels when you go into your local vape store:
Oddly enough, compliant tanks, and mod kits containing tanks, require this label even though they don’t actually contain any nicotine until you add it. This has led to everything being repackaged. Of course, when it was noted that the new warnings effectively breached Trading Standards, everything had to be packaged AGAIN!
Trading Standards advised that a second warning is needed to warn the buyer that the first warning is essentially incorrect…still with me? So now the warning pictured above should be accompanied by the text:
Applies when the product is used with e-liquids containing nicotine.
  I seriously have no idea what to say about that, it’s too silly for words. I mean you couldn’t make this stuff up.
So that’s all for now. I’ll be coming back to this subject in the future, when we’ve all spent some time under the TPD yoke and have a better idea of the consequences of these changes down the line. In the meantime, please enjoy this very highbrow inspirational poem I wrote especially for you!
The bell tolls for us all,
But we’re in this as one.
TPD’s here,
Face the music,
VAPE ON!
Sources:
http://ec.europa.eu/health//sites/health/files/tobacco/docs/dir_201440_en.pdf
https://www.gov.uk/guidance/e-cigarettes-regulations-for-consumer-products
https://chemnovatic.com/Files/COMMISSION%20IMPLEMENTING%20DECISION%20%28EU%29%202016-586.pdf
http://factsdomatter.co.uk/2016/05/04/the-impact-of-the-tpd/
http://www.luminarium.org/sevenlit/donne/meditation17.php
Featured Image:  http://bluucat.deviantart.com/art/ominous-clock-tower-254110930
0 notes