I have a very simple test case. I want to log in and out of a PHP application (SugarCRM) multiple times. I have successfully carried out a couple of basic tests, but I don't seem to get the hang of this one. A short tutorial or a link covering the above would be sufficient. Thanks for reading.
I believe the ASP.NET Login Testing with JMeter guide will provide comprehensive information on how to properly perform your testing. It shouldn't be any different for any other backend technology stack (Java, PHP, Ruby, etc.), as JMeter acts at the protocol level and doesn't care about the software implementation underlying the application under test; correlation and cookie management are standard for all web applications.
If you want to log in multiple times, use a CSV Data Set Config and read the login data from a CSV file.
Create a CSV file with usernames and passwords for multiple users and run the script. Remember that the number of threads you configure should equal the number of users, so that the login is performed multiple times.
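A minimal sketch of such a file, assuming it is called users.csv (the file name and the credentials are placeholders):

```
admin,admin123
jsmith,Secr3t!
mlopez,P@ssw0rd
```

In the CSV Data Set Config, point Filename at users.csv and set Variable Names to username,password; the login HTTP Request sampler can then send ${username} and ${password} as its form parameters, and each thread/iteration picks up the next row.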
I am working on a desktop application with Electron and I am considering online storage to store data. I would like to get some ideas on the approach, as I couldn't find reliable answers from a Google search.
Approach 1: Electron app (front end) + PHP (e.g. purchase a hosting package from GoDaddy with a domain such as www.mysite.com).
With this approach I am planning to create API calls in PHP to perform basic CRUD.
Is this a good way?
Will this affect the speed/load time?
Are there better ways for this situation?
Thank you very much in advance for the help.
Well, this is not an easy topic. Your solution could work: your Electron app asks your server for data and stores data to it. Anyway, the best solution depends on your application.
The most important questions you have to ask yourself are:
How often do you need to reach your server?
Can your users work without data from the server?
How long does it take to read and store data on your server? (It's different if you store a few KB or many GB of data.)
Must the data stored online be shared with other users, or does every user have access only to their own data?
If all the information is stored on your server, your app's startup has to wait for the request to complete, but you can show a loader or something like that to mitigate the waiting.
In my opinion you have many choices, from the simplest (and slowest) to the most complex (which mitigates network lag):
Simple AJAX requests to your server: as you described, you make some HTTP requests to your server to read and write the data to be displayed in your application (a minimal server-side PHP sketch follows this list). Your users will have to wait for the requests to complete; show them some loading animations to mitigate the wait.
There are some solutions that save the data locally in your Electron installation and then sync it online; have a look at PouchDB for an example.
Recently I've been looking at GraphQL. GraphQL is a query API for your data. It's not that easy, but it has some interesting features: clients typically keep an internal cache and are designed for optimistic updates. You update your application immediately, assuming your POST will succeed, and then if something goes wrong you update it accordingly.
I'd also like to suggest trying some solutions offered as a service. You don't have a server already and you would have to open a new contract, so why not check a dedicated service like Firebase? The Google Firebase Realtime Database allows you to work in JavaScript (just one language involved in the project) and to sync your data online automatically and between devices without the need to write any web service. I have just played with it for some prototypes, but it looks very interesting and it's cheap. It also has a free plan that is enough for many users.
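On the server side, the "simple AJAX requests" option above only needs a small PHP endpoint, which also matches the PHP CRUD API you mentioned. A minimal sketch, assuming a hypothetical notes table and PDO credentials you would replace with your own:

```php
<?php
// api/notes.php - minimal JSON CRUD sketch (table name and credentials are placeholders)
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=myapp;charset=utf8mb4', 'dbuser', 'dbpass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

switch ($_SERVER['REQUEST_METHOD']) {
    case 'GET':    // read all notes
        $rows = $pdo->query('SELECT id, body FROM notes')->fetchAll(PDO::FETCH_ASSOC);
        echo json_encode($rows);
        break;
    case 'POST':   // create a note from a JSON body: {"body": "..."}
        $input = json_decode(file_get_contents('php://input'), true);
        $stmt  = $pdo->prepare('INSERT INTO notes (body) VALUES (?)');
        $stmt->execute([$input['body']]);
        echo json_encode(['id' => (int)$pdo->lastInsertId()]);
        break;
    default:
        http_response_code(405);
        echo json_encode(['error' => 'method not allowed']);
}
```

The Electron renderer would call this endpoint over HTTP and show a loader while waiting, as described above.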
Keep in mind that if your users have access only to their own data, the fastest and easiest solution is to use a database inside your Electron application: an SQLite database, an IndexedDB database, or even serializing to JSON and storing everything in localStorage (if your data fits the size limits).
Hope this helps
I need a web application performance measurement tool. Can you suggest some good ones?
Purpose: the app is built on Lumen and the dashboard is built on Laravel. I want to measure the performance of all requests to the app and note down each request's time consumption; based on that, the app can be optimized in a better way.
I did some googling and found that JMeter is most people's choice, as it's from Apache and does the job, but it looks a little complex. I also found https://locust.io/ interesting, so I'm going to give it a try.
But I would like to get experts' suggestions or advice on this.
Thanks!
There are quite a number of free load testing tools, and the vast majority of them support the HTTP protocol, so feel free to choose any.
Regarding JMeter and Locust, if you can develop code in Python - go for Locust as you won't have to learn new things and will be able to start right away.
If your Python programming skills are not that good I would recommend reconsidering JMeter as it is not that complex at all:
JMeter is GUI-based, so you can create your test using the mouse.
JMeter comes with the HTTP(S) Test Script Recorder, so you will be able to create a test plan "skeleton" in a few minutes using your favourite browser.
JMeter supports many more protocols, e.g. you can load test databases via JDBC, mail servers via SMTP/IMAP/POP3, MQ servers via JMS, etc., while Locust is more HTTP-oriented; if you need more, you have to code it.
If the above points sound promising, check out the JMeter Academy - currently the fastest and most efficient way of ramping up on JMeter.
You can use XHProf to check every function's execution time, and it can show you the results in a web GUI:
https://pecl.php.net/package/xhprof
XHProf is a function-level hierarchical profiler for PHP with a simple HTML-based navigational interface. The raw data collection component is implemented in C (as a PHP extension), while the reporting/UI layer is all in PHP. It is capable of reporting function-level inclusive and exclusive wall times, memory usage, CPU times, and the number of calls for each function. Additionally, it supports the ability to compare two runs (hierarchical DIFF reports) or aggregate results from multiple runs.
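A minimal usage sketch, assuming the xhprof extension is installed and the bundled xhprof_lib/xhprof_html directories are available on your web server (the paths and the my_application_entry_point() call are placeholders):

```php
<?php
// Start profiling with CPU and memory metrics (requires the xhprof PECL extension).
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

// ... run the code you want to profile ...
my_application_entry_point();

// Stop profiling and save the run so the bundled web GUI can display it.
$xhprof_data = xhprof_disable();

include_once '/path/to/xhprof_lib/utils/xhprof_lib.php';
include_once '/path/to/xhprof_lib/utils/xhprof_runs.php';

$xhprof_runs = new XHProfRuns_Default();           // stores runs in the system temp dir by default
$run_id = $xhprof_runs->save_run($xhprof_data, 'myapp');

// Browse the report at http://your-host/xhprof_html/index.php?run=<run_id>&source=myapp
echo "xhprof run id: $run_id\n";
```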
I have created a PHP+MySQL web app and I am now trying to implement a logging system to store and track some actions of each user.
The purpose is the following: track the activity of each user's session by logging IP+time+action, then see which pages they accessed later on by logging time+pagename; for each user there will be a file in the format log{userid}_{month}.log.
Each log will then be viewed only by the website owner, through a custom admin panel, and the data will be used only for security purposes (i.e. to show the user if they logged in from a different IP, or if someone else logged in from a different IP, and to see which areas of the website the user accessed during their login session).
Currently I have a MySQL MyISAM table where I store userid, IP, time, and action. The app has not launched yet, but we intend to have very many users (over 100k), and using a database for this feels like suicide.
So what do you suggest? How should the logging be done? Using files, using a table in the current database, using a separate database? Are there any file-logging frameworks available for PHP?
How should the reading of the file be done then? Read the results by row?
Thank you
You have many options, so I'll speak from my experience running a startup with about 500k users, 100k active every month, which seems to be in your range.
We logged user actions in a MySQL database.
Querying your data is very easy and fast (provided good indexes)
We ran on Azure, and had a dedicated MySQL (with slaves, etc) for storing all user data, including logs. Space was not an issue.
Logging to MySQL can be slow, depending on everything you are logging, so we just pushed each log entry to Redis and had a Python app read it from Redis and insert it into MySQL in the background. This meant that logging had basically no impact on loading times.
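A hedged sketch of the "push to Redis, insert later" idea using the phpredis extension (the key name and fields are just examples, and $userId is assumed to be set earlier):

```php
<?php
// At request time: queue the log entry in Redis instead of writing to MySQL directly.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$entry = json_encode([
    'user_id' => $userId,
    'ip'      => $_SERVER['REMOTE_ADDR'],
    'time'    => time(),
    'action'  => 'login',
]);
$redis->rPush('user_action_log', $entry);   // O(1), so the request barely slows down

// A separate background worker (ours was written in Python) pops entries
// with LPOP/BLPOP and bulk-inserts them into MySQL.
```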
We decided to log in MySQL for user actions because:
We wanted to run queries on anything at any time without much effort. The structured format of the user action logs made that incredibly easy to do.
It also allows you to display certain logs to users, if you require it.
When we introduced badges, we had no need to parse text logs to award badges to those who performed a specific action X number of times. We simply wrote a query against the user action logs, and the badges were awarded (a sketch of such a query follows this list). So adding features based on actions was easy as well.
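For example, "award a badge to everyone who logged in at least 100 times" becomes a single query against the structured log. A sketch; the table, column names, and awardBadge() helper are assumptions made up for the example:

```php
<?php
// Hypothetical schema: user_action_log(user_id, ip, time, action)
$stmt = $pdo->prepare(
    'SELECT user_id, COUNT(*) AS logins
       FROM user_action_log
      WHERE action = :action
      GROUP BY user_id
     HAVING COUNT(*) >= :min_count'
);
$stmt->execute(['action' => 'login', 'min_count' => 100]);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    awardBadge((int)$row['user_id'], 'frequent-visitor');   // awardBadge() is a placeholder
}
```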
We did use file logging for a couple of application logs - or things we did not query on a daily basis - such as the Python app writing to the database, Webserver access and error logs, etc.
We used Logstash to process those logs. It can simply hook into a log file and stream it to your Logstash server. Logstash can also query your logs, which is pretty cool.
Advanced uses
We used Slack for team communications and integrated the Python database-writing app with it; this allowed us to send critical errors to a channel (via their API) where someone could action a fix immediately.
Closing
My suggestion would be to not overthink it for now: log to MySQL, query it, and see the stats. Make updates, rinse and repeat. You want to keep the cycle between deploy and update quick, so making decisions from a quick SQL query makes it easy.
Basically, what you want to avoid is logging into a server, finding a log, and grepping your way through it to find something; the above achieved that.
This is what we did, it is still running like that and we have no plans to change it soon. We haven't had any issues where we could not find anything that we needed. If there is a massive burst of users and we scale to 1mil monthly active users, then we might change it.
Please note: whichever way you decide to log, if you are saving the POST data, be sure to never do that for credit card info, unless you are compliant. Or rather use Stripe's JavaScript libraries.
If you are sure that reading the log will mainly target one user at a time, you should consider partitioning your log table:
http://dev.mysql.com/doc/refman/5.1/en/partitioning-range.html
using your user_id as partitioning key.
Since the maximum number of partitions is 1024, you could have, say, 1000 partitions, each storing roughly 1/1000 of your 100k users (around 100 users per partition), which is reasonable.
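A sketch of such a table using KEY partitioning on user_id (the linked page describes RANGE partitioning, which would also work with explicit user_id ranges); the table, columns, and partition count are taken from the question or invented for the example:

```php
<?php
// Hypothetical partitioned log table; KEY partitioning by user_id keeps all rows
// of one user in a single partition, so per-user reads stay cheap.
$pdo->exec("
    CREATE TABLE user_action_log (
        user_id INT UNSIGNED NOT NULL,
        ip      VARCHAR(45)  NOT NULL,
        time    DATETIME     NOT NULL,
        action  VARCHAR(64)  NOT NULL,
        KEY idx_user_time (user_id, time)
    ) ENGINE=InnoDB
    PARTITION BY KEY (user_id)
    PARTITIONS 100
");
```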
Are there any file-logging frameworks available for PHP?
There is this which is available on packagist: https://packagist.org/packages/psr/log
Note that it's not a file logging framework but an API for a logger based on the PSR-3 standard from FIG. So, if you like, it's the "standard" logger interface for PHP. You can build a logger that implements this interface or search around on packagist for other loggers that implement that interface (either file or MySQL based). There are a few other loggers on packagist (teacup, forestry) but it would be preferable to use one that sticks to the PSR standard.
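A minimal sketch of a file logger built on that interface, using the psr/log package's AbstractLogger; the log directory, file naming (matching the log{userid}_{month}.log format from the question), and line format are just examples:

```php
<?php
require 'vendor/autoload.php';   // composer require psr/log

use Psr\Log\AbstractLogger;

// Tiny PSR-3 compatible file logger: one file per user and month.
class UserFileLogger extends AbstractLogger
{
    private $userId;
    private $dir;

    public function __construct($userId, $dir = '/var/log/myapp')
    {
        $this->userId = $userId;
        $this->dir    = $dir;
    }

    public function log($level, $message, array $context = []): void
    {
        $file = sprintf('%s/log%d_%s.log', $this->dir, $this->userId, date('Y-m'));
        $line = sprintf("[%s] %s: %s %s\n", date('c'), $level, $message, json_encode($context));
        file_put_contents($file, $line, FILE_APPEND | LOCK_EX);
    }
}

// Usage:
$logger = new UserFileLogger(42);
$logger->info('login', ['ip' => $_SERVER['REMOTE_ADDR']]);
```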
We do logging with the great tool Graylog.
It scales as high as you want, has great tools for data visualization, is incredibly fast even for complex queries and huge datasets, and the underlying search engine (Elasticsearch) is schemaless. The latter may be an advantage, as you get more flexibility when extending your logs without the hassle MySQL schemas can give you.
Graylog, Elasticsearch, and MongoDB (which is used to save the configuration of Graylog and its web interface) are easily deployable via tools like Puppet, Chef, and the like.
Actually, logging to Graylog is easy with the PHP library Monolog.
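A hedged sketch, assuming the monolog/monolog and graylog2/gelf-php packages and a Graylog GELF UDP input; the host, port, and channel name are examples:

```php
<?php
require 'vendor/autoload.php';   // composer require monolog/monolog graylog2/gelf-php

use Gelf\Publisher;
use Gelf\Transport\UdpTransport;
use Monolog\Handler\GelfHandler;
use Monolog\Logger;

// Send log records to a Graylog GELF UDP input (host/port are examples).
$transport = new UdpTransport('graylog.example.com', 12201);
$publisher = new Publisher($transport);

$logger = new Logger('webapp');
$logger->pushHandler(new GelfHandler($publisher));

$logger->info('user login', ['user_id' => 42, 'ip' => $_SERVER['REMOTE_ADDR']]);
```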
Of course, the big disadvantage here is that you have to learn a bunch of new tools and software. But it is worth it in my opinion.
The crux of the matter is that the data you are writing is not going to be changed. In my experience, in this scenario I would use either:
MySQL with the BLACKHOLE storage engine (writes to a BLACKHOLE table are discarded locally and picked up via replication by a slave that actually stores them). Set it up right and it's blisteringly fast!
A Riak cluster (a NoSQL solution) - though this may be a learning curve for you, it might be one you eventually need to take anyway.
Use SysLog ;)
Set it up on another server and it can log all of your processes separately (such as networking, servers, SQL, Apache, and your PHP).
It can be useful for you and decrease the time spent on debugging. :)
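From PHP, sending entries to syslog only takes the built-in functions; the ident, facility, and messages below are examples:

```php
<?php
// Open a connection to the system logger; LOG_LOCAL0 can be routed by
// rsyslog/syslog-ng to a remote log server so the app server stays clean.
openlog('mywebapp', LOG_PID | LOG_ODELAY, LOG_LOCAL0);

syslog(LOG_INFO, sprintf('user=%d ip=%s action=%s', 42, $_SERVER['REMOTE_ADDR'], 'login'));
syslog(LOG_WARNING, 'failed login attempt from ' . $_SERVER['REMOTE_ADDR']);

closelog();
```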
I wrote an application in VB6/Access for a retail shop almost 8 years ago. They are still using it, and now they are asking for changes/upgrades and want to access it from multiple locations + multiple machines per location. Earlier it was just one machine per location.
All locations are going to run the same application; only the inventory and customers are different, along with the app settings. Inventory should be able to move to a different location.
I have lost touch with VB & Access, and I would also like to rewrite the app with open source tools.
I'm a web developer (PHP/MySQL) and can do HTML5 if necessary. I believe I can rewrite all the functionality with PHP/MySQL, but I am not confident about printing.
The main requirements of the app are that it should print as fast as it can and should support several custom paper sizes.
Also, the database should work in a distributed environment: all locations should be able to work independently as well as sync updates when connected.
What is the best thing I can do in this situation?
Would you recommend creating a web app, and a desktop client only for printing (i.e. VB on Windows, or a shell script on Linux)? Or any alternative?
Any recommended workflow/links for database setup/mirroring?
Or should I modify the existing VB application to run with the required MySQL architecture?
Sorry to violate the one-question-per-post rule, but I don't know how to split it.
Let's start with printing.
You could do a print CSS file, but it's not very precise. That would get printed from the client's browser.
Or you could generate a PDF. With that you could print from the server or from the client; the server would be the faster option, although multiple printers could get complicated.
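For the PDF route, a small sketch with the FPDF library (other libraries such as TCPDF or Dompdf work similarly); the 80 x 200 mm receipt size and the items are just examples of a custom paper size:

```php
<?php
require 'fpdf/fpdf.php';   // http://www.fpdf.org

// Custom paper size: 80 mm x 200 mm receipt, portrait orientation.
$pdf = new FPDF('P', 'mm', array(80, 200));
$pdf->AddPage();
$pdf->SetFont('Courier', '', 10);

$pdf->Cell(0, 5, 'My Retail Shop', 0, 1, 'C');
$pdf->Cell(0, 5, date('Y-m-d H:i'), 0, 1, 'C');
$pdf->Ln(3);
$pdf->Cell(50, 5, 'Widget A');
$pdf->Cell(0, 5, '12.50', 0, 1, 'R');
$pdf->Cell(50, 5, 'Widget B');
$pdf->Cell(0, 5, '3.99', 0, 1, 'R');

// Sends the PDF to the browser; the client (or a small desktop helper)
// can then push it straight to the printer.
$pdf->Output();
```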
Database sync:
I would treat the central database as a separate app and devise rules for each location to sync to the central location. You may not need to share all the data, and if you just replicate everything you get into complex replication rules.
I'm building a web application, and I need to use an architecture that allows me to run it over two servers. The application scrapes information from other sites, both periodically and on input from the end user. To do this I'm using PHP+cURL to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB.
Then I will use Python to run some algorithms on the data; this will happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if the result is specific to the user, skip storing the data and just serve it to the user.
I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB, and Python on another server.
What framework(s) should I use for this kind of job? Are MVC and CakePHP a good solution? If so, will I be able to control and monitor the Python code using it?
Thanks
How do I go about implementing this?
Too big a question for an answer here. Certainly you don't want two sets of code for the scraping (one for scheduled runs, one for on-demand runs). In addition to that complication, you really don't want to be running a job that will take an indefinite time to complete within the thread generated by a request to your web server - user requests for a scrape should be run via the scheduling mechanism and reported back to users (although, if necessary, you could use Ajax polling to give the illusion that it's happening in the same thread).
What framework(s) should I use?
Frameworks are not magic bullets. And you shouldn't be choosing a framework based primarily on the nature of the application you are writing. Certainly if specific, critical functionality is precluded by a specific framework, then you are using the wrong framework - but in my experience that has never been the case - you just need to write some code yourself.
using something more complex than a cron job
Yes, a cron job is probably not the right way to go for lots of reasons. If it were me I'd look at writing a daemon which would schedule scrapes (and accept connections from web page scripts to enqueue additional scrapes). But I'd run the scrapes as separate processes.
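A rough sketch of that idea, assuming a hypothetical scrape_jobs table with a status column; the web app only inserts rows, and this long-running CLI script picks them up and runs each scrape as a separate process:

```php
<?php
// scrape_daemon.php - run from the CLI (php scrape_daemon.php), not from the web server.
$pdo = new PDO('mysql:host=localhost;dbname=scraper', 'dbuser', 'dbpass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

while (true) {
    // Claim one pending job (enqueued either by scheduling rules or by the
    // web front end when a user asks for a scrape).
    $job = $pdo->query("SELECT id, url FROM scrape_jobs WHERE status = 'pending' LIMIT 1")
               ->fetch(PDO::FETCH_ASSOC);

    if ($job === false) {
        sleep(5);           // nothing to do, wait a bit
        continue;
    }

    $pdo->prepare("UPDATE scrape_jobs SET status = 'running' WHERE id = ?")
        ->execute([$job['id']]);

    // Run the actual scrape as a separate process so a slow or crashing
    // scrape never blocks the daemon (scrape_one.php is a placeholder script).
    $cmd = 'php scrape_one.php ' . escapeshellarg($job['id']) . ' > /dev/null 2>&1 &';
    shell_exec($cmd);
}
```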
Is MVC a good architecture for this? (I'm new to MVC, architectures etc.)
No. Don't start by thinking about whether a pattern fits the application - patterns are a useful tool for teaching, but they describe what code is, not what it will be.
(Your application might include some MVC patterns - but it should also include lots of other ones).
C.
I think you already have a clear idea of how to organize your layers.
First of all, you would need a web framework for your front end.
You have many choices here; CakePHP, AFAIK, is a good choice, and it is designed to force you to follow the MVC design pattern.
Then, you would need to design your database to store what users want to be spidered.
Your DB will be accessed by your web application to store users' requests, by your PHP script to know what to scrape, and finally by your Python batch to confirm to the users that the requested data is available.
A possible over-simplified scenario:
The user registers on your site.
The user asks to grab a random page from Wikipedia.
The request is stored through the CakePHP application in the DB.
A cron PHP batch starts and checks the DB for new requests (a sketch of steps 4-6 follows this list).
The batch finds the new request and scrapes the page from Wikipedia.
The batch updates the DB with a "scraped" flag.
A cron Python batch starts and checks the DB for new "scraped" flags.
The batch finds the new "scraped" flag and parses the Wikipedia page to extract some tags.
The batch updates the DB with a "done" flag.
The user finds the requested information on their profile.
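A hedged sketch of steps 4-6 (the PHP cron batch); the requests table, its columns, and the status values are assumptions made up for the example:

```php
<?php
// cron_scrape.php - run from cron, e.g. */5 * * * * php /path/to/cron_scrape.php
$pdo = new PDO('mysql:host=localhost;dbname=spider', 'dbuser', 'dbpass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// 4. Check the DB for new requests stored by the CakePHP application.
$requests = $pdo->query("SELECT id, url FROM requests WHERE status = 'new'")
                ->fetchAll(PDO::FETCH_ASSOC);

foreach ($requests as $request) {
    // 5. Scrape the page with cURL.
    $ch = curl_init($request['url']);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_TIMEOUT        => 30,
    ]);
    $html = curl_exec($ch);
    curl_close($ch);

    if ($html === false) {
        continue;   // leave the request as 'new' and retry on the next run
    }

    // 6. Store the raw HTML and set the 'scraped' flag so the Python batch picks it up.
    $stmt = $pdo->prepare("UPDATE requests SET raw_html = :html, status = 'scraped' WHERE id = :id");
    $stmt->execute(['html' => $html, 'id' => $request['id']]);
}
```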