monsterhugger · 2 years
downloaded and figured out how to use renpy and now have absolutely no ideas for what to make w it
st4r5sh4rdzxx · 1 year
my website (neocities)
i realized i never talked about my website on here and i wanna change that, so here :)
i started making it in like march for a class, because i was basically just interested in coding. all my life it's seemed like some kind of magic. like, how can you just type equations and the computer does all kinds of cool stuff?
granted i still dont fucking understand anything but im trying VERY hard. i researched python for a month and decided i was not getting anywhere, because no matter how much time i dedicated to the tutorials, i still felt confused and lost, despite it apparently being one of the easiest programming languages.
so i went with html because hey, ive seen people make their tumblrs look cool before, so why not? it also feeds into my nostalgic interest in the old web, back when the internet was free from corporate manipulation.
if ya wanna listen to me ramble on about coding more, read below the cut. and if you are desperate to share your site but have almost no one to share it with (like me :,D) feel free to reblog/reply with your url!
i think it's safe to assume that any built-by-hand website is going to be a work in progress (under construction, if you will). same with mine. for a project in a different class, i used my limited coding skillset to actually make a game, and i still dont understand how i did it. i just remember spending at least a few days zoning out in my school's library, a tutor or friend sitting in front of me who would remind me to eat and stand up and walk around a few times each day. it was exhibited in a gallery for a weekend and it was truly insane.
for some background, i'm an artist, currently pursuing my bachelors degree, and i work across multiple disciplines. i've had stuff exhibited in shows since i was in high school, granted they were small or free-to-enter youth art shows. by this point i had grown pretty accustomed to gallery nights and my anxiety about them had waned. but this was the first time in several years that i was genuinely so nervous about people's reactions to my work. i kept creeping back to where my work was displayed, anxiously watching people react to my game. in my personal opinion, it feels somewhat self-centered to stay by your artwork on gallery night during group shows, and yet here i was, displaying this bad habit.
working on games and websites is something i never planned to do. it's honestly exhilarating to come across this new interest i never had before. i feel like my art can now take a new direction that im seriously excited to keep exploring :)
four little words
Jimin x Reader - Part one
fluff/angst/smut
“Wait wait wait! Shhh”, the four little words that ruined y/n’s day.
Frozen in place, y/n’s jaw hung loose as she gawked at the explicit scene before her: wrapped around each other like squirming pythons, barely clothed, were Jungkook and Sori. Catching her friends grinding on each other like animals in heat would’ve been nauseating enough, but they weren’t just two friends; they were Jimin’s best friend and Jimin’s girlfriend.
“Y/N!” Sori gasped as Jungkook slobbered all over her neck. She grimaced; none the wiser, he continued. “Ugh, Jungkook, st-stop!” She batted his shoulder and pushed him off of her, and the boy turned his head in horror. The pair panted, speechless, at y/n. It was a standoff.
Eyes locked, the pair scurried to pull their uniforms back on.
“I’m going to spew,” she breathed, her expression cold as she slowly backed out of the bathroom.
“y/n, wait-” Sori called out, but it was too late.
“Fuck, fuck, fuck,” y/n mumbled as she practically ran down the hallway. It was the middle of class, so the halls were pretty deserted; the click of her heels hitting the linoleum floor echoed throughout the school. I only wanted to have a bloody piss, for fuck’s sake. Why is it always me? she cursed herself. If only she’d waited till the end of class, she wouldn’t be in this situation. Not knowing what to do or where to go, y/n decided she needed some air.
“Right. Well, this was a mistake.” She hugged her blazer tight; she had forgotten it was the middle of October. Pacing around the small patio, careful not to walk in front of a classroom window and land herself in detention, her mind filled with questions. How long? Since when? Do they care about each other? Or is it just sex? Who knows? So, Sori is sleeping with Jungkook. Sori is Jimin’s girlfriend; my friend, kind of. Jungkook is Jimin’s friend, practically his brother. No matter how many times y/n tried to piece together the situation, she still couldn’t believe it.
“Fuck, Sori!” y/n exclaimed. How could she do this to Jimin? Everyone knows they aren’t really happy; that’s not news. Sori uses Jimin for his status. She absolutely loves being THE Park Jimin’s girlfriend, and it helps that her father is a big corporate exec at some company that holds shares in BigHit. Our lives really are a corny drama, she thought, rolling her eyes. And Jungkook? That’ll be an even bigger blow to Jimin. Jesus Christ! How am I going to tell him? I can’t tell him. I have to tell him!
y/n must have been pacing back and forth for at least fifteen minutes when the bell signalling the end of the period screeched through campus, disrupting her train of thought.
“Crap.” As a language and arts transfer student, y/n had to maintain a pristine school record: pass all tests, attend all classes, join extracurricular activities, etc. Sure, the pressure could be exhausting at times, but the experience, the friends, and the life she had created for herself in Seoul were well worth it. She’d do it all again, twice as hard, in a heartbeat.
“You caught them at it!” Milly’s eyes bulged; the busy hallway muffled the exclusive conversation.
“Yes! I feel sick. I’ve been avoiding the boys all day! You know how bad I am with situations like these! I cannot keep secrets from people; I get sick with the guilt.” y/n groaned and slumped backwards into her locker, shrugging off a few confused looks from passers-by. Thankfully, her bright best friend had swiped her belongings from class without the professor noticing; still, even in the clear, y/n would have taken detentions for the next year in exchange for the information she had just accidentally obtained.
Milly and y/n had an advantage over the other students: they weren’t native Koreans. English was their first language and provided a secret code. Despite some students knowing English, most had a limited understanding. There were always exceptions, though, students such as Namjoon; the girls often forgot his knack for English, accidentally revealing very embarrassing comments in his presence, which he would then translate for the rest of Bangtan. Still, if they exaggerated their accents and slang, they could usually disguise their dialogue.
“I know, you’re too much of a good person!” Milly teased, earning an eye roll. “No, but seriously, that’s disgusting. I never would have thought Jungkook would ever do that. Poor Jimin!”
“Me neither! And neither would Jimin,” y/n replied.
“Were they fully going at it, like...” Milly gave y/n a look everyone understood regardless of language barriers. “Oh god, no! Thank god. If I had seen that, I would’ve plucked my own eyes out by now! Heavy petting only; belt buckles were undone but trousers were not down.” They laughed for just a moment, but couldn’t stay cheerful under the circumstances.
“He was wondering where you went earlier, when you disappeared in class to go to the bathroom. It’s like he could sense something was wrong; he said you’d been gone too long,” she mused, stuffing books into her locker beside y/n’s head.
“I can’t bear to face him, which I never thought I’d say, ’cause he’s so pretty.” y/n’s tone warmed at the end of the phrase, and she caught Milly smirk for a split second.
“He really is. Shocking that Sori would cheat on that beautiful sweetheart,” Milly sighed with genuine sadness. Mill always knew the right thing to say, how to lighten a subject without detracting from it too much.
y/n fiddled anxiously with her skirt. “I’m just not going to say anything for now. I need to think about what to do.”
“That’s what I’d do. Just take it in for now; don’t do anything rash... come on.” Milly wrapped her arm around y/n’s shoulder and pulled her towards their next class.
alanajacksontx · 5 years
Using Python to recover SEO site traffic (Part three)
When you incorporate machine learning techniques to speed up SEO recovery, the results can be amazing.
This is the third and last installment in our series on using Python to speed up SEO traffic recovery. In part one, I explained how our unique approach, which we call “winners vs losers,” helps us quickly narrow down the pages losing traffic to find the main reason for the drop. In part two, we improved on our initial approach by manually grouping pages using regular expressions, which is very useful when you have sites with thousands or millions of pages, as is typically the case with ecommerce sites. In part three, we will learn something really exciting: we will learn to automatically group pages using machine learning.
As mentioned before, you can find the code used in part one, two and three in this Google Colab notebook.
Let’s get started.
URL matching vs content matching
When we grouped pages manually in part two, we benefited from the fact that the URL groups had clear patterns (collections, products, and others), but it is often the case that there are no patterns in the URL. For example, Yahoo Stores’ sites use a flat URL structure with no directory paths. Our manual approach wouldn’t work in this case.
Fortunately, it is possible to group pages by their content, because most page templates have different content structures. They serve different user needs, so their structures have to differ.
How can we organize pages by their content? We can use DOM element selectors for this. We will specifically use XPaths.
For example, I can use the presence of a big product image to know the page is a product detail page. I can grab the product image address in the document (its XPath) by right-clicking on it in Chrome and choosing “Inspect,” then right-clicking to copy the XPath.
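To make this concrete, here is a minimal sketch of how you might test a page against a copied XPath using the requests and lxml libraries. The XPath below is a hypothetical placeholder; you would paste in the one copied from Chrome.

```python
# A minimal sketch: flag a URL as a product detail page if the copied
# XPath matches a large product image. The XPath here is hypothetical;
# paste in the one you copied from Chrome's Inspect panel.
import requests
from lxml import html

PRODUCT_IMAGE_XPATH = '//*[@id="main-product-photo"]'  # hypothetical

def looks_like_product_page(url):
    response = requests.get(url, timeout=10)
    tree = html.fromstring(response.content)
    # If the selector matches at least one element, assume this page
    # uses the product detail template
    return len(tree.xpath(PRODUCT_IMAGE_XPATH)) > 0
```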
We can identify other page groups by finding page elements that are unique to them. However, note that while this would allow us to group Yahoo Store-type sites, it would still be a manual process to create the groups.
A scientist’s bottom-up approach
In order to group pages automatically, we need to use a statistical approach. In other words, we need to find patterns in the data that we can use to cluster similar pages together because they share similar statistics. This is a perfect problem for machine learning algorithms.
BloomReach, a digital experience platform vendor, shared their machine learning solution to this problem. To summarize it, they first manually selected clean features from the HTML tags, like class IDs, CSS stylesheet names, and so on. Then, they automatically grouped pages based on the presence and variability of these features. In their tests, they achieved around 90% accuracy, which is pretty good.
When you give problems like this to scientists and engineers with no domain expertise, they will generally come up with complicated, bottom-up solutions. The scientist will say, “Here is the data I have, let me try different computer science ideas I know until I find a good solution.”
One of the reasons I advocate practitioners learn programming is that you can start solving problems using your domain expertise and find shortcuts like the one I will share next.
Hamlet’s observation and a simpler solution
For most ecommerce sites, most page templates include images (and input elements), and those generally change in quantity and size.
I decided to test the quantity and size of images, and the number of input elements, as my feature set. We were able to achieve 97.5% accuracy in our tests. This is a much simpler and more effective approach for this specific problem. All of this was possible because I didn’t start with the data I could access, but with a simpler domain-level observation.
I am not trying to say my approach is superior, as they have tested theirs in millions of pages and I’ve only tested this on a few thousand. My point is that as a practitioner you should learn this stuff so you can contribute your own expertise and creativity.
Now let’s get to the fun part and write some machine learning code in Python!
Collecting training data
We need training data to build a model. This training data needs to come pre-labeled with “correct” answers so that the model can learn from the correct answers and make its own predictions on unseen data.
In our case, as discussed above, we’ll use our intuition that most product pages have one or more large images on the page, and most category type pages have many smaller images on the page.
What’s more, product pages typically have more form elements than category pages (for filling in quantity, color, and more).
Unfortunately, crawling a web page for this data requires knowledge of web browser automation, and image manipulation, which are outside the scope of this post. Feel free to study this GitHub gist we put together to learn more.
Here we load the raw data already collected.
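The loading code itself lives in the notebook linked above. As a hedged sketch, assuming the crawler's output was exported to CSV files (both filenames here are hypothetical), it might look like this:

```python
# A sketch of the loading step, assuming CSV exports of the crawl data
# (the filenames are hypothetical stand-ins for the real output).
import pandas as pd

# One row per URL, with counts of <form> and <input> elements on that page
form_counts = pd.read_csv("form_counts.csv")

# One row per image, with its page URL, file size, height, and width
img_counts = pd.read_csv("img_counts.csv")
```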
Feature engineering
Each row of the form_counts data frame above corresponds to a single URL and provides a count of both the form elements and the input elements contained on that page.
Meanwhile, in the img_counts data frame, each row corresponds to a single image from a particular page. Each image has an associated file size, height, and width. Pages are more than likely to have multiple images, so there are many rows corresponding to each URL.
It is often the case that HTML documents don’t include explicit image dimensions. We are using a little trick to compensate for this: we capture the size of the image files, which is roughly proportional to the product of the width and height of the images.
We want our image counts and image file sizes to be treated as categorical features, not numerical ones. When a numerical feature, say new visitors, increases, it generally implies improvement, but we don’t want bigger images to imply improvement. A common technique to do this is called one-hot encoding.
Most site pages can have an arbitrary number of images. We are going to further process our dataset by bucketing images into 50 groups. This technique is called “binning”.
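Here is a rough sketch of both steps with pandas. The column names (url, size) are assumptions about how the crawl data is laid out, not necessarily the article's exact schema.

```python
# Bucket each image's file size into one of 50 bins ("binning"),
# then turn the bins into per-URL categorical features (one-hot style),
# so that a bigger image doesn't read as a "bigger" numerical value.
import pandas as pd

img_counts["size_bin"] = pd.cut(img_counts["size"], bins=50, labels=False)

# Count how many images fall into each size bin on each page,
# with one column per bin
img_features = (
    img_counts.groupby(["url", "size_bin"])
    .size()
    .unstack(fill_value=0)
    .add_prefix("size_bin_")
)

# Join with the form/input counts to get the full feature matrix
features = form_counts.set_index("url").join(img_features, how="left").fillna(0)
```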
Here is what our processed data set looks like.
Adding ground truth labels
As we already have labels from our manual regex approach in part two, we can use them as the ground truth to feed the model.
We also need to split our dataset randomly into a training set and a test set. This allows us to train the machine learning model on one set of data and test it on another set it has never seen before. We do this to prevent the model from simply “memorizing” the training data and doing terribly on new, unseen data. You can find the full code in the Google Colab notebook linked above.
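A minimal sketch of the split with scikit-learn, assuming the features matrix sketched above and a labels series holding the regex-derived page groups (both names are hypothetical):

```python
# Hold out 30% of the pages as a test set the model never sees during
# training. `features` and `labels` are the placeholder names used in
# the sketches above.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=42, stratify=labels
)
```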
Model training and grid search
Finally, the good stuff!
All the steps above, the data collection and preparation, are generally the hardest part to code. The machine learning code is generally quite simple.
We’re using the well-known scikit-learn Python library to train a number of popular models using a bunch of standard hyperparameters (settings for fine-tuning a model). Scikit-learn will run through all of them to find the best one. We simply need to feed the X variables (our engineered features from above) and the Y variables (the correct labels) into each model, call the .fit() function, and voila!
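As a hedged sketch (the exact models and hyperparameter grids in the original may differ), the loop could look like this:

```python
# Try a couple of candidate models, each with a small hyperparameter
# grid; GridSearchCV cross-validates every combination and keeps the best.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

candidates = {
    "linear_svm": (LinearSVC(), {"C": [0.1, 1, 10]}),
    "logistic_regression": (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
}

best_models = {}
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5)
    search.fit(X_train, y_train)  # the .fit() call mentioned above
    best_models[name] = search
    print(f"{name}: best cross-validation accuracy {search.best_score_:.3f}")
```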
Evaluating performance
After running the grid search, we find our winning model to be the linear SVM (0.974), with logistic regression (0.968) coming in a close second. Even with such high accuracy, a machine learning model will make mistakes. If it doesn’t make any mistakes, then there is definitely something wrong with the code.
In order to understand where the model performs best and worst, we will use another useful machine learning tool, the confusion matrix.
When looking at a confusion matrix, focus on the diagonal squares. The counts there are correct predictions, and the counts outside are failures. In the confusion matrix above, we can quickly see that the model does really well labeling products, but terribly labeling pages that are neither products nor categories. Intuitively, we can assume that such pages do not have consistent image usage.
Here is the code to put together the confusion matrix:
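(The original snippet isn't preserved in this copy; below is a minimal reconstruction with scikit-learn, assuming the best_models dictionary from the grid-search sketch above.)

```python
# Compare the winning model's predictions on the held-out test set
# against the true labels. Rows are true page groups, columns are
# predicted groups; the diagonal holds the correct predictions.
from sklearn.metrics import confusion_matrix

y_pred = best_models["linear_svm"].predict(X_test)
print(confusion_matrix(y_test, y_pred))
```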
Finally, here is the code to plot the model evaluation:
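(Again as a sketch, since the original chart isn't preserved here: a simple matplotlib comparison of test-set accuracy per model, using the best_models dictionary assumed above.)

```python
# Plot each candidate model's accuracy on the held-out test set.
import matplotlib.pyplot as plt

names = list(best_models.keys())
scores = [best_models[name].score(X_test, y_test) for name in names]

plt.bar(names, scores)
plt.ylim(0, 1)
plt.ylabel("Test accuracy")
plt.title("Model evaluation")
plt.show()
```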
Resources to learn more
You might be thinking that this is a lot of work just to tell page groups apart, and you are right!
Mirko Obkircher commented in my article for part two that there is a much simpler approach, which is to have your client set up a Google Analytics data layer with the page group type. Very smart recommendation, Mirko!
I am using this example for illustration purposes. What if the issue requires a deeper exploratory investigation? If you already started the analysis using Python, your creativity and knowledge are the only limits.
If you want to jump onto the machine learning bandwagon, here are some resources I recommend to learn more:
Attend a PyData event. I got motivated to learn data science after attending the one they host in New York.
Hands-On Introduction To Scikit-learn (sklearn)
Scikit Learn Cheat Sheet
Efficiently Searching Optimal Tuning Parameters
If you are starting from scratch and want to learn fast, I’ve heard good things about Data Camp.
Got any tips or queries? Share them in the comments.
Hamlet Batista is the CEO and founder of RankSense, an agile SEO platform for online retailers and manufacturers. He can be found on Twitter @hamletbatista.
The post Using Python to recover SEO site traffic (Part three) appeared first on Search Engine Watch: https://searchenginewatch.com/2019/04/17/using-python-to-recover-seo-site-traffic-part-three/